00:00:00.001 Started by upstream project "autotest-per-patch" build number 130922 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.049 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.125 Using shallow fetch with depth 1 00:00:00.125 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.125 > git --version # timeout=10 00:00:00.181 > git --version # 'git version 2.39.2' 00:00:00.181 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.356 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.369 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.380 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.380 > git config core.sparsecheckout # timeout=10 00:00:04.392 > git read-tree -mu HEAD # timeout=10 00:00:04.432 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.483 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.483 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.561 [Pipeline] Start of Pipeline 00:00:04.573 [Pipeline] library 00:00:04.574 Loading library shm_lib@master 00:00:04.574 Library shm_lib@master is cached. Copying from home. 00:00:04.592 [Pipeline] node 00:00:04.600 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.601 [Pipeline] { 00:00:04.612 [Pipeline] catchError 00:00:04.614 [Pipeline] { 00:00:04.626 [Pipeline] wrap 00:00:04.635 [Pipeline] { 00:00:04.643 [Pipeline] stage 00:00:04.645 [Pipeline] { (Prologue) 00:00:04.847 [Pipeline] sh 00:00:05.206 + logger -p user.info -t JENKINS-CI 00:00:05.230 [Pipeline] echo 00:00:05.231 Node: CYP12 00:00:05.239 [Pipeline] sh 00:00:05.552 [Pipeline] setCustomBuildProperty 00:00:05.563 [Pipeline] echo 00:00:05.564 Cleanup processes 00:00:05.570 [Pipeline] sh 00:00:05.859 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.859 893340 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.876 [Pipeline] sh 00:00:06.168 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.168 ++ grep -v 'sudo pgrep' 00:00:06.168 ++ awk '{print $1}' 00:00:06.168 + sudo kill -9 00:00:06.168 + true 00:00:06.186 [Pipeline] cleanWs 00:00:06.196 [WS-CLEANUP] Deleting project workspace... 00:00:06.196 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.202 [WS-CLEANUP] done 00:00:06.206 [Pipeline] setCustomBuildProperty 00:00:06.222 [Pipeline] sh 00:00:06.511 + sudo git config --global --replace-all safe.directory '*' 00:00:06.686 [Pipeline] httpRequest 00:00:07.094 [Pipeline] echo 00:00:07.096 Sorcerer 10.211.164.101 is alive 00:00:07.103 [Pipeline] retry 00:00:07.104 [Pipeline] { 00:00:07.117 [Pipeline] httpRequest 00:00:07.122 HttpMethod: GET 00:00:07.122 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.123 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.132 Response Code: HTTP/1.1 200 OK 00:00:07.132 Success: Status code 200 is in the accepted range: 200,404 00:00:07.132 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.514 [Pipeline] } 00:00:08.530 [Pipeline] // retry 00:00:08.537 [Pipeline] sh 00:00:08.826 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.843 [Pipeline] httpRequest 00:00:09.477 [Pipeline] echo 00:00:09.479 Sorcerer 10.211.164.101 is alive 00:00:09.488 [Pipeline] retry 00:00:09.490 [Pipeline] { 00:00:09.505 [Pipeline] httpRequest 00:00:09.511 HttpMethod: GET 00:00:09.511 URL: http://10.211.164.101/packages/spdk_6f51f621df8ccd69082a5568edbae458859a0b6b.tar.gz 00:00:09.512 Sending request to url: http://10.211.164.101/packages/spdk_6f51f621df8ccd69082a5568edbae458859a0b6b.tar.gz 00:00:09.516 Response Code: HTTP/1.1 200 OK 00:00:09.516 Success: Status code 200 is in the accepted range: 200,404 00:00:09.517 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6f51f621df8ccd69082a5568edbae458859a0b6b.tar.gz 00:00:35.424 [Pipeline] } 00:00:35.443 [Pipeline] // retry 00:00:35.469 [Pipeline] sh 00:00:35.768 + tar --no-same-owner -xf spdk_6f51f621df8ccd69082a5568edbae458859a0b6b.tar.gz 00:00:39.092 [Pipeline] sh 00:00:39.387 + git -C spdk log --oneline -n5 00:00:39.387 6f51f621d bdev/nvme: interrupt mode for PCIe nvme ctrlr 00:00:39.387 865972bb6 nvme: create, manage fd_group for nvme poll group 00:00:39.387 ba5b39cb2 thread: Extended options for spdk_interrupt_register 00:00:39.387 52e9db722 util: allow a fd_group to manage all its fds 00:00:39.387 6082eddb0 util: fix total fds to wait for 00:00:39.399 [Pipeline] } 00:00:39.414 [Pipeline] // stage 00:00:39.425 [Pipeline] stage 00:00:39.427 [Pipeline] { (Prepare) 00:00:39.443 [Pipeline] writeFile 00:00:39.458 [Pipeline] sh 00:00:39.750 + logger -p user.info -t JENKINS-CI 00:00:39.764 [Pipeline] sh 00:00:40.055 + logger -p user.info -t JENKINS-CI 00:00:40.068 [Pipeline] sh 00:00:40.358 + cat autorun-spdk.conf 00:00:40.358 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.358 SPDK_TEST_NVMF=1 00:00:40.358 SPDK_TEST_NVME_CLI=1 00:00:40.358 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.358 SPDK_TEST_NVMF_NICS=e810 00:00:40.358 SPDK_TEST_VFIOUSER=1 00:00:40.358 SPDK_RUN_UBSAN=1 00:00:40.358 NET_TYPE=phy 00:00:40.367 RUN_NIGHTLY=0 00:00:40.372 [Pipeline] readFile 00:00:40.395 [Pipeline] withEnv 00:00:40.397 [Pipeline] { 00:00:40.410 [Pipeline] sh 00:00:40.698 + set -ex 00:00:40.698 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:40.698 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:40.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.698 ++ SPDK_TEST_NVMF=1 00:00:40.698 ++ SPDK_TEST_NVME_CLI=1 00:00:40.698 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.698 ++ SPDK_TEST_NVMF_NICS=e810 
00:00:40.698 ++ SPDK_TEST_VFIOUSER=1 00:00:40.698 ++ SPDK_RUN_UBSAN=1 00:00:40.698 ++ NET_TYPE=phy 00:00:40.698 ++ RUN_NIGHTLY=0 00:00:40.698 + case $SPDK_TEST_NVMF_NICS in 00:00:40.698 + DRIVERS=ice 00:00:40.698 + [[ tcp == \r\d\m\a ]] 00:00:40.698 + [[ -n ice ]] 00:00:40.698 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:40.698 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:40.698 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:40.698 rmmod: ERROR: Module irdma is not currently loaded 00:00:40.698 rmmod: ERROR: Module i40iw is not currently loaded 00:00:40.698 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:40.698 + true 00:00:40.698 + for D in $DRIVERS 00:00:40.698 + sudo modprobe ice 00:00:40.698 + exit 0 00:00:40.708 [Pipeline] } 00:00:40.724 [Pipeline] // withEnv 00:00:40.728 [Pipeline] } 00:00:40.742 [Pipeline] // stage 00:00:40.752 [Pipeline] catchError 00:00:40.754 [Pipeline] { 00:00:40.768 [Pipeline] timeout 00:00:40.768 Timeout set to expire in 1 hr 0 min 00:00:40.770 [Pipeline] { 00:00:40.783 [Pipeline] stage 00:00:40.784 [Pipeline] { (Tests) 00:00:40.797 [Pipeline] sh 00:00:41.087 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:41.087 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:41.087 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:41.087 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:41.087 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.087 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:41.087 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:41.087 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:41.087 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:41.087 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:41.087 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:41.087 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:41.087 + source /etc/os-release 00:00:41.087 ++ NAME='Fedora Linux' 00:00:41.087 ++ VERSION='39 (Cloud Edition)' 00:00:41.087 ++ ID=fedora 00:00:41.087 ++ VERSION_ID=39 00:00:41.087 ++ VERSION_CODENAME= 00:00:41.087 ++ PLATFORM_ID=platform:f39 00:00:41.087 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:41.087 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:41.087 ++ LOGO=fedora-logo-icon 00:00:41.087 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:41.087 ++ HOME_URL=https://fedoraproject.org/ 00:00:41.087 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:41.087 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:41.087 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:41.087 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:41.087 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:41.087 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:41.087 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:41.087 ++ SUPPORT_END=2024-11-12 00:00:41.087 ++ VARIANT='Cloud Edition' 00:00:41.087 ++ VARIANT_ID=cloud 00:00:41.087 + uname -a 00:00:41.087 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:41.087 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:44.390 Hugepages 00:00:44.390 node hugesize free / total 00:00:44.390 node0 1048576kB 0 / 0 00:00:44.390 node0 2048kB 0 / 0 00:00:44.390 node1 1048576kB 0 / 0 00:00:44.390 node1 2048kB 0 / 0 00:00:44.390 00:00:44.390 Type BDF 
Vendor Device NUMA Driver Device Block devices 00:00:44.390 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:44.390 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:44.390 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:44.390 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:44.390 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:44.390 + rm -f /tmp/spdk-ld-path 00:00:44.390 + source autorun-spdk.conf 00:00:44.390 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.390 ++ SPDK_TEST_NVMF=1 00:00:44.390 ++ SPDK_TEST_NVME_CLI=1 00:00:44.390 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.390 ++ SPDK_TEST_NVMF_NICS=e810 00:00:44.390 ++ SPDK_TEST_VFIOUSER=1 00:00:44.390 ++ SPDK_RUN_UBSAN=1 00:00:44.390 ++ NET_TYPE=phy 00:00:44.390 ++ RUN_NIGHTLY=0 00:00:44.390 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:44.390 + [[ -n '' ]] 00:00:44.390 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:44.390 + for M in /var/spdk/build-*-manifest.txt 00:00:44.390 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:44.390 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:44.390 + for M in /var/spdk/build-*-manifest.txt 00:00:44.390 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:44.390 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:44.390 + for M in /var/spdk/build-*-manifest.txt 00:00:44.390 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:44.390 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:44.390 ++ uname 00:00:44.390 + [[ Linux == \L\i\n\u\x ]] 00:00:44.390 + sudo dmesg -T 00:00:44.390 + sudo dmesg --clear 00:00:44.390 + dmesg_pid=894345 00:00:44.390 + [[ Fedora Linux == FreeBSD ]] 00:00:44.390 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:44.390 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:44.390 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:44.390 + [[ -x /usr/src/fio-static/fio ]] 00:00:44.390 + export FIO_BIN=/usr/src/fio-static/fio 00:00:44.390 + FIO_BIN=/usr/src/fio-static/fio 00:00:44.390 + sudo dmesg -Tw 00:00:44.390 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:44.390 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:44.390 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:44.390 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:44.390 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:44.390 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:44.390 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:44.390 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:44.390 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:44.652 Test configuration: 00:00:44.652 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.652 SPDK_TEST_NVMF=1 00:00:44.652 SPDK_TEST_NVME_CLI=1 00:00:44.653 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.653 SPDK_TEST_NVMF_NICS=e810 00:00:44.653 SPDK_TEST_VFIOUSER=1 00:00:44.653 SPDK_RUN_UBSAN=1 00:00:44.653 NET_TYPE=phy 00:00:44.653 RUN_NIGHTLY=0 18:16:38 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:44.653 18:16:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:44.653 18:16:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:44.653 18:16:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:44.653 18:16:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:44.653 18:16:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:44.653 18:16:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.653 18:16:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.653 18:16:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.653 18:16:38 -- paths/export.sh@5 -- $ export PATH 00:00:44.653 18:16:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.653 18:16:38 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:44.653 18:16:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:44.653 18:16:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728404198.XXXXXX 00:00:44.653 18:16:38 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728404198.I9NuXk 00:00:44.653 18:16:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:44.653 18:16:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:44.653 18:16:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:44.653 18:16:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:44.653 18:16:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:44.653 18:16:38 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:44.653 18:16:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:44.653 18:16:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:44.653 18:16:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:44.653 18:16:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:44.653 18:16:38 -- pm/common@17 -- $ local monitor 00:00:44.653 18:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.653 18:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.653 18:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.653 18:16:38 -- pm/common@21 -- $ date +%s 00:00:44.653 18:16:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.653 18:16:38 -- pm/common@25 -- $ sleep 1 00:00:44.653 18:16:38 -- pm/common@21 -- $ date +%s 00:00:44.653 18:16:38 -- pm/common@21 -- $ date +%s 00:00:44.653 18:16:38 -- pm/common@21 -- $ date +%s 00:00:44.653 18:16:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728404198 00:00:44.653 18:16:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728404198 00:00:44.653 18:16:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728404198 00:00:44.653 18:16:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728404198 00:00:44.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728404198_collect-vmstat.pm.log 00:00:44.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728404198_collect-cpu-load.pm.log 00:00:44.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728404198_collect-cpu-temp.pm.log 00:00:44.653 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728404198_collect-bmc-pm.bmc.pm.log 00:00:45.598 18:16:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:45.598 18:16:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:45.598 18:16:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:45.598 18:16:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.598 18:16:39 -- spdk/autobuild.sh@16 -- $ date -u 00:00:45.598 Tue Oct 8 04:16:39 PM UTC 2024 00:00:45.598 18:16:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:45.598 v25.01-pre-53-g6f51f621d 00:00:45.598 18:16:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:45.598 18:16:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:45.598 18:16:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:45.598 18:16:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:45.598 18:16:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:45.598 18:16:39 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.860 ************************************ 00:00:45.860 START TEST ubsan 00:00:45.860 ************************************ 00:00:45.860 18:16:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:45.860 using ubsan 00:00:45.860 00:00:45.860 real 0m0.001s 00:00:45.860 user 0m0.001s 00:00:45.860 sys 0m0.000s 00:00:45.860 18:16:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:45.860 18:16:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:45.860 ************************************ 00:00:45.860 END TEST ubsan 00:00:45.860 ************************************ 00:00:45.860 18:16:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:45.860 18:16:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:45.860 18:16:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:45.860 18:16:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:45.860 18:16:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:45.860 18:16:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:45.860 18:16:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:45.860 18:16:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:45.860 18:16:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:45.860 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:45.860 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:46.434 Using 'verbs' RDMA provider 00:01:02.300 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:14.544 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:14.805 Creating mk/config.mk...done. 00:01:14.805 Creating mk/cc.flags.mk...done. 00:01:14.805 Type 'make' to build. 
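A minimal reproduction sketch of the configure-and-build stage recorded above: the flag set is copied verbatim from the autobuild.sh trace, while the workspace path and -j144 are specific to this CI runner and are assumptions for any other machine.

  # Sketch only: flags copied from the configure line logged above;
  # the path and the job count match this runner, adjust both locally.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j144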
00:01:14.805 18:17:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:14.805 18:17:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:14.805 18:17:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:14.805 18:17:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.805 ************************************ 00:01:14.805 START TEST make 00:01:14.805 ************************************ 00:01:14.805 18:17:08 make -- common/autotest_common.sh@1125 -- $ make -j144 00:01:15.380 make[1]: Nothing to be done for 'all'. 00:01:16.771 The Meson build system 00:01:16.771 Version: 1.5.0 00:01:16.771 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:16.771 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:16.771 Build type: native build 00:01:16.771 Project name: libvfio-user 00:01:16.771 Project version: 0.0.1 00:01:16.771 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:16.771 C linker for the host machine: cc ld.bfd 2.40-14 00:01:16.771 Host machine cpu family: x86_64 00:01:16.771 Host machine cpu: x86_64 00:01:16.771 Run-time dependency threads found: YES 00:01:16.771 Library dl found: YES 00:01:16.771 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:16.771 Run-time dependency json-c found: YES 0.17 00:01:16.771 Run-time dependency cmocka found: YES 1.1.7 00:01:16.771 Program pytest-3 found: NO 00:01:16.771 Program flake8 found: NO 00:01:16.771 Program misspell-fixer found: NO 00:01:16.771 Program restructuredtext-lint found: NO 00:01:16.771 Program valgrind found: YES (/usr/bin/valgrind) 00:01:16.771 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:16.771 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:16.771 Compiler for C supports arguments -Wwrite-strings: YES 00:01:16.771 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:16.771 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:16.771 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:16.771 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:16.771 Build targets in project: 8 00:01:16.771 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:16.771 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:16.771 00:01:16.771 libvfio-user 0.0.1 00:01:16.771 00:01:16.772 User defined options 00:01:16.772 buildtype : debug 00:01:16.772 default_library: shared 00:01:16.772 libdir : /usr/local/lib 00:01:16.772 00:01:16.772 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:17.032 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:17.294 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:17.294 [2/37] Compiling C object samples/null.p/null.c.o 00:01:17.294 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:17.294 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:17.294 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:17.294 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:17.294 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:17.294 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:17.294 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:17.294 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:17.294 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:17.294 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:17.294 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:17.294 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:17.294 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:17.294 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:17.294 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:17.294 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:17.294 [19/37] Compiling C object samples/server.p/server.c.o 00:01:17.294 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:17.294 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:17.294 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:17.294 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:17.294 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:17.294 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:17.294 [26/37] Compiling C object samples/client.p/client.c.o 00:01:17.294 [27/37] Linking target samples/client 00:01:17.294 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:17.294 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:17.556 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:17.556 [31/37] Linking target test/unit_tests 00:01:17.556 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:17.556 [33/37] Linking target samples/gpio-pci-idio-16 00:01:17.556 [34/37] Linking target samples/server 00:01:17.556 [35/37] Linking target samples/null 00:01:17.556 [36/37] Linking target samples/lspci 00:01:17.556 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:17.556 INFO: autodetecting backend as ninja 00:01:17.556 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
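A minimal sketch of the build-then-stage pattern the surrounding lines record, assuming the same workspace layout as this job: libvfio-user is compiled in its meson build directory, then installed into a DESTDIR staging root under the SPDK tree rather than system-wide.

  # Sketch only: mirrors the ninja build above and the DESTDIR install below;
  # the workspace paths are this job's and are assumptions elsewhere.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  ninja
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C .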
00:01:17.816 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:18.077 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:18.077 ninja: no work to do. 00:01:24.678 The Meson build system 00:01:24.678 Version: 1.5.0 00:01:24.678 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:24.678 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:24.678 Build type: native build 00:01:24.678 Program cat found: YES (/usr/bin/cat) 00:01:24.678 Project name: DPDK 00:01:24.678 Project version: 24.03.0 00:01:24.678 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:24.678 C linker for the host machine: cc ld.bfd 2.40-14 00:01:24.678 Host machine cpu family: x86_64 00:01:24.678 Host machine cpu: x86_64 00:01:24.678 Message: ## Building in Developer Mode ## 00:01:24.678 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:24.678 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:24.678 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:24.678 Program python3 found: YES (/usr/bin/python3) 00:01:24.678 Program cat found: YES (/usr/bin/cat) 00:01:24.678 Compiler for C supports arguments -march=native: YES 00:01:24.678 Checking for size of "void *" : 8 00:01:24.678 Checking for size of "void *" : 8 (cached) 00:01:24.678 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:24.678 Library m found: YES 00:01:24.678 Library numa found: YES 00:01:24.678 Has header "numaif.h" : YES 00:01:24.678 Library fdt found: NO 00:01:24.678 Library execinfo found: NO 00:01:24.678 Has header "execinfo.h" : YES 00:01:24.678 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:24.678 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:24.678 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:24.678 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:24.678 Run-time dependency openssl found: YES 3.1.1 00:01:24.678 Run-time dependency libpcap found: YES 1.10.4 00:01:24.678 Has header "pcap.h" with dependency libpcap: YES 00:01:24.678 Compiler for C supports arguments -Wcast-qual: YES 00:01:24.678 Compiler for C supports arguments -Wdeprecated: YES 00:01:24.678 Compiler for C supports arguments -Wformat: YES 00:01:24.678 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:24.678 Compiler for C supports arguments -Wformat-security: NO 00:01:24.678 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:24.678 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:24.678 Compiler for C supports arguments -Wnested-externs: YES 00:01:24.678 Compiler for C supports arguments -Wold-style-definition: YES 00:01:24.678 Compiler for C supports arguments -Wpointer-arith: YES 00:01:24.678 Compiler for C supports arguments -Wsign-compare: YES 00:01:24.678 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:24.678 Compiler for C supports arguments -Wundef: YES 00:01:24.678 Compiler for C supports arguments -Wwrite-strings: YES 00:01:24.678 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:24.678 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:24.678 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:24.678 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:24.678 Program objdump found: YES (/usr/bin/objdump) 00:01:24.678 Compiler for C supports arguments -mavx512f: YES 00:01:24.678 Checking if "AVX512 checking" compiles: YES 00:01:24.678 Fetching value of define "__SSE4_2__" : 1 00:01:24.678 Fetching value of define "__AES__" : 1 00:01:24.678 Fetching value of define "__AVX__" : 1 00:01:24.678 Fetching value of define "__AVX2__" : 1 00:01:24.678 Fetching value of define "__AVX512BW__" : 1 00:01:24.678 Fetching value of define "__AVX512CD__" : 1 00:01:24.678 Fetching value of define "__AVX512DQ__" : 1 00:01:24.678 Fetching value of define "__AVX512F__" : 1 00:01:24.678 Fetching value of define "__AVX512VL__" : 1 00:01:24.678 Fetching value of define "__PCLMUL__" : 1 00:01:24.678 Fetching value of define "__RDRND__" : 1 00:01:24.678 Fetching value of define "__RDSEED__" : 1 00:01:24.678 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:24.678 Fetching value of define "__znver1__" : (undefined) 00:01:24.678 Fetching value of define "__znver2__" : (undefined) 00:01:24.678 Fetching value of define "__znver3__" : (undefined) 00:01:24.678 Fetching value of define "__znver4__" : (undefined) 00:01:24.678 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:24.678 Message: lib/log: Defining dependency "log" 00:01:24.678 Message: lib/kvargs: Defining dependency "kvargs" 00:01:24.678 Message: lib/telemetry: Defining dependency "telemetry" 00:01:24.678 Checking for function "getentropy" : NO 00:01:24.678 Message: lib/eal: Defining dependency "eal" 00:01:24.678 Message: lib/ring: Defining dependency "ring" 00:01:24.678 Message: lib/rcu: Defining dependency "rcu" 00:01:24.678 Message: lib/mempool: Defining dependency "mempool" 00:01:24.678 Message: lib/mbuf: Defining dependency "mbuf" 00:01:24.678 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:24.678 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:24.678 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:24.678 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:24.678 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:24.679 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:24.679 Compiler for C supports arguments -mpclmul: YES 00:01:24.679 Compiler for C supports arguments -maes: YES 00:01:24.679 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.679 Compiler for C supports arguments -mavx512bw: YES 00:01:24.679 Compiler for C supports arguments -mavx512dq: YES 00:01:24.679 Compiler for C supports arguments -mavx512vl: YES 00:01:24.679 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:24.679 Compiler for C supports arguments -mavx2: YES 00:01:24.679 Compiler for C supports arguments -mavx: YES 00:01:24.679 Message: lib/net: Defining dependency "net" 00:01:24.679 Message: lib/meter: Defining dependency "meter" 00:01:24.679 Message: lib/ethdev: Defining dependency "ethdev" 00:01:24.679 Message: lib/pci: Defining dependency "pci" 00:01:24.679 Message: lib/cmdline: Defining dependency "cmdline" 00:01:24.679 Message: lib/hash: Defining dependency "hash" 00:01:24.679 Message: lib/timer: Defining dependency "timer" 00:01:24.679 Message: lib/compressdev: Defining dependency "compressdev" 00:01:24.679 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:24.679 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:24.679 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:24.679 Message: lib/power: Defining dependency "power" 00:01:24.679 Message: lib/reorder: Defining dependency "reorder" 00:01:24.679 Message: lib/security: Defining dependency "security" 00:01:24.679 Has header "linux/userfaultfd.h" : YES 00:01:24.679 Has header "linux/vduse.h" : YES 00:01:24.679 Message: lib/vhost: Defining dependency "vhost" 00:01:24.679 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:24.679 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:24.679 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:24.679 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:24.679 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:24.679 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:24.679 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:24.679 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:24.679 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:24.679 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:24.679 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:24.679 Configuring doxy-api-html.conf using configuration 00:01:24.679 Configuring doxy-api-man.conf using configuration 00:01:24.679 Program mandb found: YES (/usr/bin/mandb) 00:01:24.679 Program sphinx-build found: NO 00:01:24.679 Configuring rte_build_config.h using configuration 00:01:24.679 Message: 00:01:24.679 ================= 00:01:24.679 Applications Enabled 00:01:24.679 ================= 00:01:24.679 00:01:24.679 apps: 00:01:24.679 00:01:24.679 00:01:24.679 Message: 00:01:24.679 ================= 00:01:24.679 Libraries Enabled 00:01:24.679 ================= 00:01:24.679 00:01:24.679 libs: 00:01:24.679 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:24.679 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:24.679 cryptodev, dmadev, power, reorder, security, vhost, 00:01:24.679 00:01:24.679 Message: 00:01:24.679 =============== 00:01:24.679 Drivers Enabled 00:01:24.679 =============== 00:01:24.679 00:01:24.679 common: 00:01:24.679 00:01:24.679 bus: 00:01:24.679 pci, vdev, 00:01:24.679 mempool: 00:01:24.679 ring, 00:01:24.679 dma: 00:01:24.679 00:01:24.679 net: 00:01:24.679 00:01:24.679 crypto: 00:01:24.679 00:01:24.679 compress: 00:01:24.679 00:01:24.679 vdpa: 00:01:24.679 00:01:24.679 00:01:24.679 Message: 00:01:24.679 ================= 00:01:24.679 Content Skipped 00:01:24.679 ================= 00:01:24.679 00:01:24.679 apps: 00:01:24.679 dumpcap: explicitly disabled via build config 00:01:24.679 graph: explicitly disabled via build config 00:01:24.679 pdump: explicitly disabled via build config 00:01:24.679 proc-info: explicitly disabled via build config 00:01:24.679 test-acl: explicitly disabled via build config 00:01:24.679 test-bbdev: explicitly disabled via build config 00:01:24.679 test-cmdline: explicitly disabled via build config 00:01:24.679 test-compress-perf: explicitly disabled via build config 00:01:24.679 test-crypto-perf: explicitly disabled via build config 00:01:24.679 test-dma-perf: explicitly disabled via build config 00:01:24.679 test-eventdev: explicitly disabled via build config 00:01:24.679 test-fib: explicitly disabled via build config 00:01:24.679 test-flow-perf: explicitly disabled via build config 00:01:24.679 test-gpudev: explicitly disabled 
via build config 00:01:24.679 test-mldev: explicitly disabled via build config 00:01:24.679 test-pipeline: explicitly disabled via build config 00:01:24.679 test-pmd: explicitly disabled via build config 00:01:24.679 test-regex: explicitly disabled via build config 00:01:24.679 test-sad: explicitly disabled via build config 00:01:24.679 test-security-perf: explicitly disabled via build config 00:01:24.679 00:01:24.679 libs: 00:01:24.679 argparse: explicitly disabled via build config 00:01:24.679 metrics: explicitly disabled via build config 00:01:24.679 acl: explicitly disabled via build config 00:01:24.679 bbdev: explicitly disabled via build config 00:01:24.679 bitratestats: explicitly disabled via build config 00:01:24.679 bpf: explicitly disabled via build config 00:01:24.679 cfgfile: explicitly disabled via build config 00:01:24.679 distributor: explicitly disabled via build config 00:01:24.679 efd: explicitly disabled via build config 00:01:24.679 eventdev: explicitly disabled via build config 00:01:24.679 dispatcher: explicitly disabled via build config 00:01:24.679 gpudev: explicitly disabled via build config 00:01:24.679 gro: explicitly disabled via build config 00:01:24.679 gso: explicitly disabled via build config 00:01:24.679 ip_frag: explicitly disabled via build config 00:01:24.679 jobstats: explicitly disabled via build config 00:01:24.679 latencystats: explicitly disabled via build config 00:01:24.679 lpm: explicitly disabled via build config 00:01:24.679 member: explicitly disabled via build config 00:01:24.679 pcapng: explicitly disabled via build config 00:01:24.679 rawdev: explicitly disabled via build config 00:01:24.679 regexdev: explicitly disabled via build config 00:01:24.679 mldev: explicitly disabled via build config 00:01:24.679 rib: explicitly disabled via build config 00:01:24.679 sched: explicitly disabled via build config 00:01:24.679 stack: explicitly disabled via build config 00:01:24.679 ipsec: explicitly disabled via build config 00:01:24.679 pdcp: explicitly disabled via build config 00:01:24.679 fib: explicitly disabled via build config 00:01:24.679 port: explicitly disabled via build config 00:01:24.679 pdump: explicitly disabled via build config 00:01:24.679 table: explicitly disabled via build config 00:01:24.679 pipeline: explicitly disabled via build config 00:01:24.679 graph: explicitly disabled via build config 00:01:24.679 node: explicitly disabled via build config 00:01:24.679 00:01:24.679 drivers: 00:01:24.679 common/cpt: not in enabled drivers build config 00:01:24.679 common/dpaax: not in enabled drivers build config 00:01:24.679 common/iavf: not in enabled drivers build config 00:01:24.679 common/idpf: not in enabled drivers build config 00:01:24.679 common/ionic: not in enabled drivers build config 00:01:24.679 common/mvep: not in enabled drivers build config 00:01:24.679 common/octeontx: not in enabled drivers build config 00:01:24.679 bus/auxiliary: not in enabled drivers build config 00:01:24.679 bus/cdx: not in enabled drivers build config 00:01:24.679 bus/dpaa: not in enabled drivers build config 00:01:24.679 bus/fslmc: not in enabled drivers build config 00:01:24.679 bus/ifpga: not in enabled drivers build config 00:01:24.679 bus/platform: not in enabled drivers build config 00:01:24.679 bus/uacce: not in enabled drivers build config 00:01:24.679 bus/vmbus: not in enabled drivers build config 00:01:24.679 common/cnxk: not in enabled drivers build config 00:01:24.679 common/mlx5: not in enabled drivers build config 00:01:24.679 
common/nfp: not in enabled drivers build config 00:01:24.679 common/nitrox: not in enabled drivers build config 00:01:24.679 common/qat: not in enabled drivers build config 00:01:24.679 common/sfc_efx: not in enabled drivers build config 00:01:24.679 mempool/bucket: not in enabled drivers build config 00:01:24.679 mempool/cnxk: not in enabled drivers build config 00:01:24.679 mempool/dpaa: not in enabled drivers build config 00:01:24.679 mempool/dpaa2: not in enabled drivers build config 00:01:24.679 mempool/octeontx: not in enabled drivers build config 00:01:24.679 mempool/stack: not in enabled drivers build config 00:01:24.679 dma/cnxk: not in enabled drivers build config 00:01:24.679 dma/dpaa: not in enabled drivers build config 00:01:24.679 dma/dpaa2: not in enabled drivers build config 00:01:24.679 dma/hisilicon: not in enabled drivers build config 00:01:24.679 dma/idxd: not in enabled drivers build config 00:01:24.679 dma/ioat: not in enabled drivers build config 00:01:24.679 dma/skeleton: not in enabled drivers build config 00:01:24.679 net/af_packet: not in enabled drivers build config 00:01:24.679 net/af_xdp: not in enabled drivers build config 00:01:24.679 net/ark: not in enabled drivers build config 00:01:24.679 net/atlantic: not in enabled drivers build config 00:01:24.679 net/avp: not in enabled drivers build config 00:01:24.679 net/axgbe: not in enabled drivers build config 00:01:24.679 net/bnx2x: not in enabled drivers build config 00:01:24.679 net/bnxt: not in enabled drivers build config 00:01:24.679 net/bonding: not in enabled drivers build config 00:01:24.679 net/cnxk: not in enabled drivers build config 00:01:24.679 net/cpfl: not in enabled drivers build config 00:01:24.679 net/cxgbe: not in enabled drivers build config 00:01:24.679 net/dpaa: not in enabled drivers build config 00:01:24.679 net/dpaa2: not in enabled drivers build config 00:01:24.679 net/e1000: not in enabled drivers build config 00:01:24.679 net/ena: not in enabled drivers build config 00:01:24.679 net/enetc: not in enabled drivers build config 00:01:24.679 net/enetfec: not in enabled drivers build config 00:01:24.679 net/enic: not in enabled drivers build config 00:01:24.679 net/failsafe: not in enabled drivers build config 00:01:24.679 net/fm10k: not in enabled drivers build config 00:01:24.679 net/gve: not in enabled drivers build config 00:01:24.679 net/hinic: not in enabled drivers build config 00:01:24.679 net/hns3: not in enabled drivers build config 00:01:24.679 net/i40e: not in enabled drivers build config 00:01:24.679 net/iavf: not in enabled drivers build config 00:01:24.679 net/ice: not in enabled drivers build config 00:01:24.679 net/idpf: not in enabled drivers build config 00:01:24.679 net/igc: not in enabled drivers build config 00:01:24.679 net/ionic: not in enabled drivers build config 00:01:24.679 net/ipn3ke: not in enabled drivers build config 00:01:24.679 net/ixgbe: not in enabled drivers build config 00:01:24.680 net/mana: not in enabled drivers build config 00:01:24.680 net/memif: not in enabled drivers build config 00:01:24.680 net/mlx4: not in enabled drivers build config 00:01:24.680 net/mlx5: not in enabled drivers build config 00:01:24.680 net/mvneta: not in enabled drivers build config 00:01:24.680 net/mvpp2: not in enabled drivers build config 00:01:24.680 net/netvsc: not in enabled drivers build config 00:01:24.680 net/nfb: not in enabled drivers build config 00:01:24.680 net/nfp: not in enabled drivers build config 00:01:24.680 net/ngbe: not in enabled drivers build 
config 00:01:24.680 net/null: not in enabled drivers build config 00:01:24.680 net/octeontx: not in enabled drivers build config 00:01:24.680 net/octeon_ep: not in enabled drivers build config 00:01:24.680 net/pcap: not in enabled drivers build config 00:01:24.680 net/pfe: not in enabled drivers build config 00:01:24.680 net/qede: not in enabled drivers build config 00:01:24.680 net/ring: not in enabled drivers build config 00:01:24.680 net/sfc: not in enabled drivers build config 00:01:24.680 net/softnic: not in enabled drivers build config 00:01:24.680 net/tap: not in enabled drivers build config 00:01:24.680 net/thunderx: not in enabled drivers build config 00:01:24.680 net/txgbe: not in enabled drivers build config 00:01:24.680 net/vdev_netvsc: not in enabled drivers build config 00:01:24.680 net/vhost: not in enabled drivers build config 00:01:24.680 net/virtio: not in enabled drivers build config 00:01:24.680 net/vmxnet3: not in enabled drivers build config 00:01:24.680 raw/*: missing internal dependency, "rawdev" 00:01:24.680 crypto/armv8: not in enabled drivers build config 00:01:24.680 crypto/bcmfs: not in enabled drivers build config 00:01:24.680 crypto/caam_jr: not in enabled drivers build config 00:01:24.680 crypto/ccp: not in enabled drivers build config 00:01:24.680 crypto/cnxk: not in enabled drivers build config 00:01:24.680 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.680 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.680 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.680 crypto/mlx5: not in enabled drivers build config 00:01:24.680 crypto/mvsam: not in enabled drivers build config 00:01:24.680 crypto/nitrox: not in enabled drivers build config 00:01:24.680 crypto/null: not in enabled drivers build config 00:01:24.680 crypto/octeontx: not in enabled drivers build config 00:01:24.680 crypto/openssl: not in enabled drivers build config 00:01:24.680 crypto/scheduler: not in enabled drivers build config 00:01:24.680 crypto/uadk: not in enabled drivers build config 00:01:24.680 crypto/virtio: not in enabled drivers build config 00:01:24.680 compress/isal: not in enabled drivers build config 00:01:24.680 compress/mlx5: not in enabled drivers build config 00:01:24.680 compress/nitrox: not in enabled drivers build config 00:01:24.680 compress/octeontx: not in enabled drivers build config 00:01:24.680 compress/zlib: not in enabled drivers build config 00:01:24.680 regex/*: missing internal dependency, "regexdev" 00:01:24.680 ml/*: missing internal dependency, "mldev" 00:01:24.680 vdpa/ifc: not in enabled drivers build config 00:01:24.680 vdpa/mlx5: not in enabled drivers build config 00:01:24.680 vdpa/nfp: not in enabled drivers build config 00:01:24.680 vdpa/sfc: not in enabled drivers build config 00:01:24.680 event/*: missing internal dependency, "eventdev" 00:01:24.680 baseband/*: missing internal dependency, "bbdev" 00:01:24.680 gpu/*: missing internal dependency, "gpudev" 00:01:24.680 00:01:24.680 00:01:24.680 Build targets in project: 84 00:01:24.680 00:01:24.680 DPDK 24.03.0 00:01:24.680 00:01:24.680 User defined options 00:01:24.680 buildtype : debug 00:01:24.680 default_library : shared 00:01:24.680 libdir : lib 00:01:24.680 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.680 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:24.680 c_link_args : 00:01:24.680 cpu_instruction_set: native 00:01:24.680 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:24.680 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:24.680 enable_docs : false 00:01:24.680 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:24.680 enable_kmods : false 00:01:24.680 max_lcores : 128 00:01:24.680 tests : false 00:01:24.680 00:01:24.680 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.680 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:24.680 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:24.680 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.680 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.680 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.680 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.680 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.680 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.680 [8/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.680 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.680 [10/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.680 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.680 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.680 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.680 [14/267] Linking static target lib/librte_kvargs.a 00:01:24.680 [15/267] Linking static target lib/librte_log.a 00:01:24.680 [16/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.680 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.680 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.680 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.680 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.680 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.680 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.680 [23/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.680 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.680 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.680 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.941 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.941 [28/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:24.941 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.941 [30/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.941 [31/267] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:24.941 [32/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.941 [33/267] Linking static target lib/librte_pci.a 00:01:24.941 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.941 [35/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:24.941 [36/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.941 [37/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.941 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:24.941 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.941 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.941 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.941 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:24.941 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:24.942 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:24.942 [45/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:25.202 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.202 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.202 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.202 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:25.202 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:25.202 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.202 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.202 [53/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:25.202 [54/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:25.202 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:25.202 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.202 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.202 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.202 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.202 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:25.202 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:25.202 [62/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.202 [63/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.202 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.202 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.202 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.202 [67/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.202 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.202 [69/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.202 [70/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.202 [71/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.202 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.202 [73/267] Linking static target lib/librte_ring.a 00:01:25.202 [74/267] Linking static target lib/librte_timer.a 00:01:25.202 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.202 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:25.202 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.202 [78/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.202 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.202 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.202 [81/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:25.202 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.202 [83/267] Linking static target lib/librte_telemetry.a 00:01:25.202 [84/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.202 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.202 [86/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.202 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.202 [88/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:25.202 [89/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.202 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.202 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.202 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.202 [93/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:25.202 [94/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:25.202 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.202 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.202 [97/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.202 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.202 [99/267] Linking static target lib/librte_meter.a 00:01:25.202 [100/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.202 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.202 [102/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.202 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:25.202 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.202 [105/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.202 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:25.202 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.202 [108/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.202 [109/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:25.202 [110/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:25.202 [111/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 
00:01:25.202 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.202 [113/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:25.202 [114/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:25.202 [115/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:25.202 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.202 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.202 [118/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:25.202 [119/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.202 [120/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.202 [121/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.202 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.202 [123/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.202 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:25.202 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:25.202 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.202 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:25.202 [128/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.203 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.203 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.203 [131/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.203 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.203 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.203 [134/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.203 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.203 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.203 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.203 [138/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:25.203 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.203 [140/267] Linking static target lib/librte_compressdev.a 00:01:25.203 [141/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.465 [142/267] Linking static target lib/librte_cmdline.a 00:01:25.465 [143/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:25.465 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.465 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:25.465 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:25.465 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.465 [148/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.465 [149/267] Linking static target lib/librte_rcu.a 00:01:25.465 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.465 [151/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.465 [152/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.465 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.465 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.465 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.465 [156/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:25.465 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.465 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.465 [159/267] Linking static target lib/librte_security.a 00:01:25.465 [160/267] Linking static target lib/librte_dmadev.a 00:01:25.465 [161/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.465 [162/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.465 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:25.465 [164/267] Linking target lib/librte_log.so.24.1 00:01:25.465 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:25.465 [166/267] Linking static target lib/librte_mempool.a 00:01:25.465 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.465 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.465 [169/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.465 [170/267] Linking static target lib/librte_power.a 00:01:25.465 [171/267] Linking static target lib/librte_net.a 00:01:25.465 [172/267] Linking static target lib/librte_eal.a 00:01:25.465 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.465 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.465 [175/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:25.465 [176/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:25.465 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:25.465 [178/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.465 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.465 [180/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.465 [181/267] Linking static target lib/librte_reorder.a 00:01:25.465 [182/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.465 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.465 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:25.465 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.465 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.465 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.465 [188/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:25.465 [189/267] Linking static target drivers/librte_bus_vdev.a 00:01:25.465 [190/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:25.465 [191/267] Linking static target lib/librte_mbuf.a 00:01:25.465 [192/267] Linking target lib/librte_kvargs.so.24.1 00:01:25.727 [193/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:25.727 [194/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:25.727 [195/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:25.727 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.727 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.727 [198/267] Linking static target lib/librte_hash.a 00:01:25.727 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:25.727 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:25.727 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:25.727 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.727 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.727 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:25.727 [205/267] Linking static target drivers/librte_mempool_ring.a 00:01:25.727 [206/267] Linking static target lib/librte_cryptodev.a 00:01:25.727 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.727 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.727 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:25.727 [210/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.727 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:25.989 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.989 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:25.989 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.989 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.989 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:25.989 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.251 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.251 [219/267] Linking static target lib/librte_ethdev.a 00:01:26.251 [220/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.512 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.512 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.513 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.513 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.773 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.773 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.347 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:27.347 [228/267] Linking static target lib/librte_vhost.a 00:01:27.919 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:29.840 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.441 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.014 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.014 [233/267] Linking target lib/librte_eal.so.24.1 00:01:37.275 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:37.275 [235/267] Linking target lib/librte_ring.so.24.1 00:01:37.275 [236/267] Linking target lib/librte_timer.so.24.1 00:01:37.275 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:37.275 [238/267] Linking target lib/librte_meter.so.24.1 00:01:37.275 [239/267] Linking target lib/librte_pci.so.24.1 00:01:37.275 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:37.275 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:37.275 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:37.275 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:37.275 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:37.535 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:37.535 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:37.535 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:37.535 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:37.535 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:37.535 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:37.535 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:37.535 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:37.797 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:37.797 [254/267] Linking target lib/librte_net.so.24.1 00:01:37.797 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:37.797 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:37.797 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:38.057 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:38.057 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:38.057 [260/267] Linking target lib/librte_hash.so.24.1 00:01:38.057 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:38.057 [262/267] Linking target lib/librte_security.so.24.1 00:01:38.057 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:38.057 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:38.057 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:38.319 [266/267] Linking target lib/librte_power.so.24.1 00:01:38.319 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:38.319 INFO: autodetecting backend as ninja 00:01:38.319 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:41.627 CC lib/log/log.o 00:01:41.627 CC lib/log/log_flags.o 00:01:41.627 CC lib/log/log_deprecated.o 00:01:41.627 CC lib/ut_mock/mock.o 00:01:41.627 CC lib/ut/ut.o 00:01:41.627 LIB libspdk_log.a 00:01:41.627 LIB libspdk_ut_mock.a 00:01:41.627 LIB libspdk_ut.a 
00:01:41.627 SO libspdk_log.so.7.0 00:01:41.627 SO libspdk_ut_mock.so.6.0 00:01:41.627 SO libspdk_ut.so.2.0 00:01:41.627 SYMLINK libspdk_log.so 00:01:41.627 SYMLINK libspdk_ut_mock.so 00:01:41.627 SYMLINK libspdk_ut.so 00:01:41.627 CC lib/util/base64.o 00:01:41.627 CC lib/ioat/ioat.o 00:01:41.627 CC lib/util/bit_array.o 00:01:41.627 CXX lib/trace_parser/trace.o 00:01:41.627 CC lib/dma/dma.o 00:01:41.627 CC lib/util/cpuset.o 00:01:41.627 CC lib/util/crc16.o 00:01:41.627 CC lib/util/crc32.o 00:01:41.627 CC lib/util/crc32c.o 00:01:41.627 CC lib/util/crc32_ieee.o 00:01:41.627 CC lib/util/crc64.o 00:01:41.627 CC lib/util/dif.o 00:01:41.627 CC lib/util/fd.o 00:01:41.627 CC lib/util/fd_group.o 00:01:41.627 CC lib/util/file.o 00:01:41.627 CC lib/util/iov.o 00:01:41.627 CC lib/util/hexlify.o 00:01:41.627 CC lib/util/math.o 00:01:41.627 CC lib/util/net.o 00:01:41.627 CC lib/util/pipe.o 00:01:41.627 CC lib/util/strerror_tls.o 00:01:41.627 CC lib/util/string.o 00:01:41.627 CC lib/util/uuid.o 00:01:41.627 CC lib/util/xor.o 00:01:41.627 CC lib/util/zipf.o 00:01:41.627 CC lib/util/md5.o 00:01:41.889 CC lib/vfio_user/host/vfio_user_pci.o 00:01:41.889 CC lib/vfio_user/host/vfio_user.o 00:01:41.889 LIB libspdk_dma.a 00:01:41.889 SO libspdk_dma.so.5.0 00:01:41.889 LIB libspdk_ioat.a 00:01:42.151 SO libspdk_ioat.so.7.0 00:01:42.151 SYMLINK libspdk_dma.so 00:01:42.151 SYMLINK libspdk_ioat.so 00:01:42.151 LIB libspdk_vfio_user.a 00:01:42.151 SO libspdk_vfio_user.so.5.0 00:01:42.151 LIB libspdk_util.a 00:01:42.151 SYMLINK libspdk_vfio_user.so 00:01:42.151 SO libspdk_util.so.10.1 00:01:42.413 SYMLINK libspdk_util.so 00:01:42.413 LIB libspdk_trace_parser.a 00:01:42.676 SO libspdk_trace_parser.so.6.0 00:01:42.676 SYMLINK libspdk_trace_parser.so 00:01:42.676 CC lib/conf/conf.o 00:01:42.676 CC lib/vmd/vmd.o 00:01:42.676 CC lib/json/json_parse.o 00:01:42.676 CC lib/vmd/led.o 00:01:42.676 CC lib/json/json_util.o 00:01:42.676 CC lib/idxd/idxd.o 00:01:42.676 CC lib/json/json_write.o 00:01:42.676 CC lib/idxd/idxd_user.o 00:01:42.676 CC lib/idxd/idxd_kernel.o 00:01:42.676 CC lib/rdma_provider/common.o 00:01:42.676 CC lib/env_dpdk/env.o 00:01:42.676 CC lib/rdma_utils/rdma_utils.o 00:01:42.676 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:42.676 CC lib/env_dpdk/memory.o 00:01:42.676 CC lib/env_dpdk/pci.o 00:01:42.676 CC lib/env_dpdk/init.o 00:01:42.676 CC lib/env_dpdk/threads.o 00:01:42.676 CC lib/env_dpdk/pci_ioat.o 00:01:42.676 CC lib/env_dpdk/pci_virtio.o 00:01:42.676 CC lib/env_dpdk/pci_vmd.o 00:01:42.676 CC lib/env_dpdk/pci_idxd.o 00:01:42.676 CC lib/env_dpdk/pci_event.o 00:01:42.676 CC lib/env_dpdk/sigbus_handler.o 00:01:42.676 CC lib/env_dpdk/pci_dpdk.o 00:01:42.676 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:42.676 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:42.937 LIB libspdk_rdma_provider.a 00:01:42.937 LIB libspdk_conf.a 00:01:42.937 SO libspdk_rdma_provider.so.6.0 00:01:42.937 SO libspdk_conf.so.6.0 00:01:43.199 LIB libspdk_json.a 00:01:43.199 LIB libspdk_rdma_utils.a 00:01:43.199 SYMLINK libspdk_rdma_provider.so 00:01:43.199 SO libspdk_json.so.6.0 00:01:43.199 SO libspdk_rdma_utils.so.1.0 00:01:43.199 SYMLINK libspdk_conf.so 00:01:43.199 SYMLINK libspdk_json.so 00:01:43.199 SYMLINK libspdk_rdma_utils.so 00:01:43.461 LIB libspdk_idxd.a 00:01:43.461 LIB libspdk_vmd.a 00:01:43.461 SO libspdk_idxd.so.12.1 00:01:43.461 SO libspdk_vmd.so.6.0 00:01:43.461 SYMLINK libspdk_idxd.so 00:01:43.461 SYMLINK libspdk_vmd.so 00:01:43.461 CC lib/jsonrpc/jsonrpc_server.o 00:01:43.461 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:43.461 CC 
lib/jsonrpc/jsonrpc_client.o 00:01:43.461 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:43.723 LIB libspdk_jsonrpc.a 00:01:43.723 SO libspdk_jsonrpc.so.6.0 00:01:43.984 SYMLINK libspdk_jsonrpc.so 00:01:43.984 LIB libspdk_env_dpdk.a 00:01:43.984 SO libspdk_env_dpdk.so.15.1 00:01:44.246 SYMLINK libspdk_env_dpdk.so 00:01:44.246 CC lib/rpc/rpc.o 00:01:44.507 LIB libspdk_rpc.a 00:01:44.507 SO libspdk_rpc.so.6.0 00:01:44.507 SYMLINK libspdk_rpc.so 00:01:45.080 CC lib/notify/notify.o 00:01:45.080 CC lib/notify/notify_rpc.o 00:01:45.080 CC lib/keyring/keyring.o 00:01:45.080 CC lib/keyring/keyring_rpc.o 00:01:45.081 CC lib/trace/trace.o 00:01:45.081 CC lib/trace/trace_flags.o 00:01:45.081 CC lib/trace/trace_rpc.o 00:01:45.081 LIB libspdk_notify.a 00:01:45.081 SO libspdk_notify.so.6.0 00:01:45.342 LIB libspdk_keyring.a 00:01:45.342 LIB libspdk_trace.a 00:01:45.342 SO libspdk_keyring.so.2.0 00:01:45.342 SYMLINK libspdk_notify.so 00:01:45.342 SO libspdk_trace.so.11.0 00:01:45.342 SYMLINK libspdk_keyring.so 00:01:45.342 SYMLINK libspdk_trace.so 00:01:45.603 CC lib/thread/thread.o 00:01:45.603 CC lib/thread/iobuf.o 00:01:45.603 CC lib/sock/sock.o 00:01:45.603 CC lib/sock/sock_rpc.o 00:01:46.177 LIB libspdk_sock.a 00:01:46.177 SO libspdk_sock.so.10.0 00:01:46.177 SYMLINK libspdk_sock.so 00:01:46.439 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:46.439 CC lib/nvme/nvme_ctrlr.o 00:01:46.439 CC lib/nvme/nvme_fabric.o 00:01:46.439 CC lib/nvme/nvme_ns_cmd.o 00:01:46.439 CC lib/nvme/nvme_ns.o 00:01:46.439 CC lib/nvme/nvme_pcie_common.o 00:01:46.439 CC lib/nvme/nvme_pcie.o 00:01:46.439 CC lib/nvme/nvme_qpair.o 00:01:46.439 CC lib/nvme/nvme.o 00:01:46.439 CC lib/nvme/nvme_quirks.o 00:01:46.439 CC lib/nvme/nvme_transport.o 00:01:46.439 CC lib/nvme/nvme_discovery.o 00:01:46.439 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:46.439 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:46.439 CC lib/nvme/nvme_tcp.o 00:01:46.439 CC lib/nvme/nvme_opal.o 00:01:46.439 CC lib/nvme/nvme_io_msg.o 00:01:46.439 CC lib/nvme/nvme_poll_group.o 00:01:46.700 CC lib/nvme/nvme_zns.o 00:01:46.700 CC lib/nvme/nvme_stubs.o 00:01:46.700 CC lib/nvme/nvme_auth.o 00:01:46.700 CC lib/nvme/nvme_cuse.o 00:01:46.700 CC lib/nvme/nvme_vfio_user.o 00:01:46.700 CC lib/nvme/nvme_rdma.o 00:01:46.960 LIB libspdk_thread.a 00:01:46.960 SO libspdk_thread.so.10.2 00:01:47.221 SYMLINK libspdk_thread.so 00:01:47.489 CC lib/fsdev/fsdev.o 00:01:47.489 CC lib/fsdev/fsdev_rpc.o 00:01:47.489 CC lib/fsdev/fsdev_io.o 00:01:47.489 CC lib/blob/blobstore.o 00:01:47.489 CC lib/blob/request.o 00:01:47.489 CC lib/blob/zeroes.o 00:01:47.489 CC lib/blob/blob_bs_dev.o 00:01:47.489 CC lib/init/json_config.o 00:01:47.489 CC lib/accel/accel.o 00:01:47.489 CC lib/init/subsystem.o 00:01:47.489 CC lib/virtio/virtio.o 00:01:47.489 CC lib/init/subsystem_rpc.o 00:01:47.489 CC lib/accel/accel_rpc.o 00:01:47.489 CC lib/init/rpc.o 00:01:47.489 CC lib/virtio/virtio_vhost_user.o 00:01:47.489 CC lib/accel/accel_sw.o 00:01:47.489 CC lib/virtio/virtio_vfio_user.o 00:01:47.489 CC lib/virtio/virtio_pci.o 00:01:47.489 CC lib/vfu_tgt/tgt_endpoint.o 00:01:47.489 CC lib/vfu_tgt/tgt_rpc.o 00:01:47.757 LIB libspdk_init.a 00:01:47.757 SO libspdk_init.so.6.0 00:01:47.757 LIB libspdk_virtio.a 00:01:47.757 LIB libspdk_vfu_tgt.a 00:01:47.757 SYMLINK libspdk_init.so 00:01:47.757 SO libspdk_virtio.so.7.0 00:01:47.757 SO libspdk_vfu_tgt.so.3.0 00:01:48.020 SYMLINK libspdk_vfu_tgt.so 00:01:48.020 SYMLINK libspdk_virtio.so 00:01:48.020 LIB libspdk_fsdev.a 00:01:48.020 SO libspdk_fsdev.so.1.0 00:01:48.281 CC lib/event/app.o 00:01:48.281 CC 
lib/event/reactor.o 00:01:48.281 CC lib/event/log_rpc.o 00:01:48.281 CC lib/event/app_rpc.o 00:01:48.281 CC lib/event/scheduler_static.o 00:01:48.281 SYMLINK libspdk_fsdev.so 00:01:48.543 LIB libspdk_accel.a 00:01:48.543 LIB libspdk_nvme.a 00:01:48.543 SO libspdk_accel.so.16.0 00:01:48.543 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:48.543 SYMLINK libspdk_accel.so 00:01:48.543 LIB libspdk_event.a 00:01:48.543 SO libspdk_nvme.so.15.0 00:01:48.543 SO libspdk_event.so.15.0 00:01:48.804 SYMLINK libspdk_event.so 00:01:48.804 SYMLINK libspdk_nvme.so 00:01:48.804 CC lib/bdev/bdev.o 00:01:49.066 CC lib/bdev/bdev_rpc.o 00:01:49.066 CC lib/bdev/bdev_zone.o 00:01:49.066 CC lib/bdev/part.o 00:01:49.066 CC lib/bdev/scsi_nvme.o 00:01:49.066 LIB libspdk_fuse_dispatcher.a 00:01:49.066 SO libspdk_fuse_dispatcher.so.1.0 00:01:49.327 SYMLINK libspdk_fuse_dispatcher.so 00:01:50.272 LIB libspdk_blob.a 00:01:50.272 SO libspdk_blob.so.11.0 00:01:50.272 SYMLINK libspdk_blob.so 00:01:50.533 CC lib/lvol/lvol.o 00:01:50.533 CC lib/blobfs/blobfs.o 00:01:50.533 CC lib/blobfs/tree.o 00:01:51.477 LIB libspdk_bdev.a 00:01:51.477 SO libspdk_bdev.so.17.0 00:01:51.477 LIB libspdk_blobfs.a 00:01:51.477 SO libspdk_blobfs.so.10.0 00:01:51.477 SYMLINK libspdk_bdev.so 00:01:51.477 LIB libspdk_lvol.a 00:01:51.477 SO libspdk_lvol.so.10.0 00:01:51.477 SYMLINK libspdk_blobfs.so 00:01:51.477 SYMLINK libspdk_lvol.so 00:01:51.739 CC lib/nbd/nbd.o 00:01:51.739 CC lib/nvmf/ctrlr.o 00:01:51.739 CC lib/nbd/nbd_rpc.o 00:01:51.739 CC lib/nvmf/ctrlr_discovery.o 00:01:51.739 CC lib/nvmf/ctrlr_bdev.o 00:01:51.739 CC lib/ublk/ublk.o 00:01:51.739 CC lib/scsi/dev.o 00:01:51.739 CC lib/nvmf/subsystem.o 00:01:51.739 CC lib/scsi/lun.o 00:01:51.739 CC lib/ublk/ublk_rpc.o 00:01:51.739 CC lib/nvmf/nvmf.o 00:01:51.739 CC lib/ftl/ftl_core.o 00:01:51.739 CC lib/nvmf/nvmf_rpc.o 00:01:51.739 CC lib/scsi/port.o 00:01:51.739 CC lib/ftl/ftl_init.o 00:01:51.739 CC lib/nvmf/transport.o 00:01:51.739 CC lib/scsi/scsi.o 00:01:51.739 CC lib/ftl/ftl_layout.o 00:01:51.739 CC lib/nvmf/tcp.o 00:01:51.739 CC lib/ftl/ftl_debug.o 00:01:51.739 CC lib/scsi/scsi_bdev.o 00:01:51.739 CC lib/nvmf/stubs.o 00:01:51.739 CC lib/nvmf/mdns_server.o 00:01:51.739 CC lib/scsi/scsi_pr.o 00:01:51.739 CC lib/nvmf/vfio_user.o 00:01:51.739 CC lib/ftl/ftl_io.o 00:01:51.739 CC lib/nvmf/rdma.o 00:01:51.739 CC lib/scsi/scsi_rpc.o 00:01:51.739 CC lib/ftl/ftl_sb.o 00:01:51.739 CC lib/nvmf/auth.o 00:01:51.739 CC lib/scsi/task.o 00:01:51.739 CC lib/ftl/ftl_l2p.o 00:01:51.739 CC lib/ftl/ftl_l2p_flat.o 00:01:51.739 CC lib/ftl/ftl_nv_cache.o 00:01:51.739 CC lib/ftl/ftl_band.o 00:01:51.739 CC lib/ftl/ftl_band_ops.o 00:01:51.739 CC lib/ftl/ftl_writer.o 00:01:51.739 CC lib/ftl/ftl_rq.o 00:01:51.739 CC lib/ftl/ftl_reloc.o 00:01:51.739 CC lib/ftl/ftl_l2p_cache.o 00:01:51.739 CC lib/ftl/ftl_p2l_log.o 00:01:51.739 CC lib/ftl/ftl_p2l.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:51.739 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:51.739 CC lib/ftl/utils/ftl_conf.o 00:01:51.739 CC lib/ftl/utils/ftl_mempool.o 
00:01:51.739 CC lib/ftl/utils/ftl_md.o 00:01:51.739 CC lib/ftl/utils/ftl_bitmap.o 00:01:51.739 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:51.739 CC lib/ftl/utils/ftl_property.o 00:01:51.739 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:51.739 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:51.739 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:51.739 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:51.739 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:51.739 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:51.739 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:51.739 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:51.739 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:51.739 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:51.739 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:51.739 CC lib/ftl/base/ftl_base_dev.o 00:01:51.739 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:52.000 CC lib/ftl/base/ftl_base_bdev.o 00:01:52.000 CC lib/ftl/ftl_trace.o 00:01:52.572 LIB libspdk_nbd.a 00:01:52.572 SO libspdk_nbd.so.7.0 00:01:52.572 LIB libspdk_scsi.a 00:01:52.833 SYMLINK libspdk_nbd.so 00:01:52.833 SO libspdk_scsi.so.9.0 00:01:52.833 SYMLINK libspdk_scsi.so 00:01:52.833 LIB libspdk_ublk.a 00:01:52.833 SO libspdk_ublk.so.3.0 00:01:52.833 SYMLINK libspdk_ublk.so 00:01:53.095 LIB libspdk_ftl.a 00:01:53.095 SO libspdk_ftl.so.9.0 00:01:53.095 CC lib/vhost/vhost.o 00:01:53.095 CC lib/vhost/vhost_rpc.o 00:01:53.095 CC lib/vhost/vhost_scsi.o 00:01:53.095 CC lib/vhost/vhost_blk.o 00:01:53.095 CC lib/vhost/rte_vhost_user.o 00:01:53.095 CC lib/iscsi/conn.o 00:01:53.095 CC lib/iscsi/init_grp.o 00:01:53.095 CC lib/iscsi/iscsi.o 00:01:53.095 CC lib/iscsi/param.o 00:01:53.095 CC lib/iscsi/portal_grp.o 00:01:53.095 CC lib/iscsi/tgt_node.o 00:01:53.095 CC lib/iscsi/iscsi_subsystem.o 00:01:53.095 CC lib/iscsi/iscsi_rpc.o 00:01:53.095 CC lib/iscsi/task.o 00:01:53.357 SYMLINK libspdk_ftl.so 00:01:53.931 LIB libspdk_nvmf.a 00:01:53.931 SO libspdk_nvmf.so.19.0 00:01:54.193 LIB libspdk_vhost.a 00:01:54.193 SO libspdk_vhost.so.8.0 00:01:54.193 SYMLINK libspdk_nvmf.so 00:01:54.455 SYMLINK libspdk_vhost.so 00:01:54.455 LIB libspdk_iscsi.a 00:01:54.455 SO libspdk_iscsi.so.8.0 00:01:54.717 SYMLINK libspdk_iscsi.so 00:01:55.290 CC module/env_dpdk/env_dpdk_rpc.o 00:01:55.290 CC module/vfu_device/vfu_virtio.o 00:01:55.290 CC module/vfu_device/vfu_virtio_blk.o 00:01:55.290 CC module/vfu_device/vfu_virtio_scsi.o 00:01:55.290 CC module/vfu_device/vfu_virtio_rpc.o 00:01:55.290 CC module/vfu_device/vfu_virtio_fs.o 00:01:55.290 LIB libspdk_env_dpdk_rpc.a 00:01:55.290 CC module/accel/ioat/accel_ioat.o 00:01:55.290 CC module/accel/ioat/accel_ioat_rpc.o 00:01:55.290 CC module/accel/error/accel_error.o 00:01:55.290 CC module/accel/error/accel_error_rpc.o 00:01:55.290 CC module/sock/posix/posix.o 00:01:55.290 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:55.290 CC module/scheduler/gscheduler/gscheduler.o 00:01:55.290 CC module/blob/bdev/blob_bdev.o 00:01:55.290 CC module/keyring/file/keyring.o 00:01:55.290 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:55.290 CC module/keyring/file/keyring_rpc.o 00:01:55.290 CC module/accel/iaa/accel_iaa.o 00:01:55.290 CC module/accel/iaa/accel_iaa_rpc.o 00:01:55.290 CC module/accel/dsa/accel_dsa.o 00:01:55.290 CC module/accel/dsa/accel_dsa_rpc.o 00:01:55.290 CC module/keyring/linux/keyring.o 00:01:55.290 CC module/keyring/linux/keyring_rpc.o 00:01:55.290 CC module/fsdev/aio/fsdev_aio.o 00:01:55.290 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:55.290 CC module/fsdev/aio/linux_aio_mgr.o 00:01:55.290 SO libspdk_env_dpdk_rpc.so.6.0 00:01:55.551 SYMLINK 
libspdk_env_dpdk_rpc.so 00:01:55.551 LIB libspdk_scheduler_gscheduler.a 00:01:55.551 LIB libspdk_scheduler_dpdk_governor.a 00:01:55.551 LIB libspdk_accel_error.a 00:01:55.551 LIB libspdk_keyring_file.a 00:01:55.551 LIB libspdk_keyring_linux.a 00:01:55.551 LIB libspdk_accel_ioat.a 00:01:55.551 SO libspdk_scheduler_gscheduler.so.4.0 00:01:55.551 LIB libspdk_scheduler_dynamic.a 00:01:55.551 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:55.551 SO libspdk_keyring_file.so.2.0 00:01:55.551 SO libspdk_accel_error.so.2.0 00:01:55.551 SO libspdk_keyring_linux.so.1.0 00:01:55.551 SO libspdk_accel_ioat.so.6.0 00:01:55.551 LIB libspdk_accel_iaa.a 00:01:55.551 SO libspdk_scheduler_dynamic.so.4.0 00:01:55.551 SO libspdk_accel_iaa.so.3.0 00:01:55.814 SYMLINK libspdk_scheduler_gscheduler.so 00:01:55.814 SYMLINK libspdk_keyring_file.so 00:01:55.814 LIB libspdk_blob_bdev.a 00:01:55.814 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:55.814 SYMLINK libspdk_accel_error.so 00:01:55.814 LIB libspdk_accel_dsa.a 00:01:55.814 SYMLINK libspdk_accel_ioat.so 00:01:55.814 SYMLINK libspdk_keyring_linux.so 00:01:55.814 SO libspdk_blob_bdev.so.11.0 00:01:55.814 SYMLINK libspdk_scheduler_dynamic.so 00:01:55.814 SO libspdk_accel_dsa.so.5.0 00:01:55.814 SYMLINK libspdk_accel_iaa.so 00:01:55.814 LIB libspdk_vfu_device.a 00:01:55.814 SYMLINK libspdk_blob_bdev.so 00:01:55.814 SYMLINK libspdk_accel_dsa.so 00:01:55.814 SO libspdk_vfu_device.so.3.0 00:01:55.814 SYMLINK libspdk_vfu_device.so 00:01:56.077 LIB libspdk_fsdev_aio.a 00:01:56.077 LIB libspdk_sock_posix.a 00:01:56.077 SO libspdk_fsdev_aio.so.1.0 00:01:56.077 SO libspdk_sock_posix.so.6.0 00:01:56.077 SYMLINK libspdk_fsdev_aio.so 00:01:56.339 SYMLINK libspdk_sock_posix.so 00:01:56.339 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:56.339 CC module/blobfs/bdev/blobfs_bdev.o 00:01:56.339 CC module/bdev/error/vbdev_error.o 00:01:56.339 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:56.339 CC module/bdev/delay/vbdev_delay.o 00:01:56.339 CC module/bdev/error/vbdev_error_rpc.o 00:01:56.339 CC module/bdev/lvol/vbdev_lvol.o 00:01:56.339 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:56.339 CC module/bdev/gpt/gpt.o 00:01:56.339 CC module/bdev/null/bdev_null.o 00:01:56.339 CC module/bdev/null/bdev_null_rpc.o 00:01:56.339 CC module/bdev/gpt/vbdev_gpt.o 00:01:56.339 CC module/bdev/malloc/bdev_malloc.o 00:01:56.339 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:56.339 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:56.339 CC module/bdev/iscsi/bdev_iscsi.o 00:01:56.339 CC module/bdev/aio/bdev_aio.o 00:01:56.339 CC module/bdev/split/vbdev_split.o 00:01:56.339 CC module/bdev/raid/bdev_raid.o 00:01:56.339 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:56.339 CC module/bdev/aio/bdev_aio_rpc.o 00:01:56.339 CC module/bdev/passthru/vbdev_passthru.o 00:01:56.339 CC module/bdev/split/vbdev_split_rpc.o 00:01:56.339 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:56.339 CC module/bdev/raid/bdev_raid_rpc.o 00:01:56.339 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:56.339 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:56.339 CC module/bdev/raid/bdev_raid_sb.o 00:01:56.339 CC module/bdev/nvme/bdev_nvme.o 00:01:56.339 CC module/bdev/raid/raid1.o 00:01:56.339 CC module/bdev/raid/raid0.o 00:01:56.339 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:56.339 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:56.339 CC module/bdev/raid/concat.o 00:01:56.339 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:56.339 CC module/bdev/nvme/nvme_rpc.o 00:01:56.339 CC module/bdev/nvme/bdev_mdns_client.o 00:01:56.339 CC 
module/bdev/ftl/bdev_ftl.o 00:01:56.339 CC module/bdev/nvme/vbdev_opal.o 00:01:56.339 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:56.339 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:56.339 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:56.600 LIB libspdk_blobfs_bdev.a 00:01:56.862 LIB libspdk_bdev_split.a 00:01:56.862 SO libspdk_blobfs_bdev.so.6.0 00:01:56.862 LIB libspdk_bdev_null.a 00:01:56.862 SO libspdk_bdev_split.so.6.0 00:01:56.862 LIB libspdk_bdev_error.a 00:01:56.862 LIB libspdk_bdev_passthru.a 00:01:56.862 LIB libspdk_bdev_gpt.a 00:01:56.862 SO libspdk_bdev_null.so.6.0 00:01:56.862 SYMLINK libspdk_blobfs_bdev.so 00:01:56.862 LIB libspdk_bdev_ftl.a 00:01:56.862 SO libspdk_bdev_error.so.6.0 00:01:56.862 SO libspdk_bdev_passthru.so.6.0 00:01:56.862 SO libspdk_bdev_gpt.so.6.0 00:01:56.862 LIB libspdk_bdev_aio.a 00:01:56.862 SYMLINK libspdk_bdev_split.so 00:01:56.862 LIB libspdk_bdev_iscsi.a 00:01:56.862 SO libspdk_bdev_ftl.so.6.0 00:01:56.862 LIB libspdk_bdev_zone_block.a 00:01:56.862 LIB libspdk_bdev_malloc.a 00:01:56.862 SYMLINK libspdk_bdev_null.so 00:01:56.862 LIB libspdk_bdev_delay.a 00:01:56.862 SO libspdk_bdev_aio.so.6.0 00:01:56.862 SO libspdk_bdev_iscsi.so.6.0 00:01:56.862 SYMLINK libspdk_bdev_error.so 00:01:56.862 SYMLINK libspdk_bdev_passthru.so 00:01:56.862 SO libspdk_bdev_zone_block.so.6.0 00:01:56.862 SYMLINK libspdk_bdev_gpt.so 00:01:56.862 SO libspdk_bdev_malloc.so.6.0 00:01:56.862 SO libspdk_bdev_delay.so.6.0 00:01:56.862 SYMLINK libspdk_bdev_ftl.so 00:01:56.862 LIB libspdk_bdev_lvol.a 00:01:56.862 SYMLINK libspdk_bdev_aio.so 00:01:56.862 SYMLINK libspdk_bdev_iscsi.so 00:01:57.124 SO libspdk_bdev_lvol.so.6.0 00:01:57.124 SYMLINK libspdk_bdev_zone_block.so 00:01:57.124 SYMLINK libspdk_bdev_malloc.so 00:01:57.124 LIB libspdk_bdev_virtio.a 00:01:57.124 SYMLINK libspdk_bdev_delay.so 00:01:57.124 SO libspdk_bdev_virtio.so.6.0 00:01:57.124 SYMLINK libspdk_bdev_lvol.so 00:01:57.124 SYMLINK libspdk_bdev_virtio.so 00:01:57.385 LIB libspdk_bdev_raid.a 00:01:57.385 SO libspdk_bdev_raid.so.6.0 00:01:57.647 SYMLINK libspdk_bdev_raid.so 00:01:58.593 LIB libspdk_bdev_nvme.a 00:01:58.593 SO libspdk_bdev_nvme.so.7.0 00:01:58.593 SYMLINK libspdk_bdev_nvme.so 00:01:59.536 CC module/event/subsystems/iobuf/iobuf.o 00:01:59.536 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:59.536 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:59.536 CC module/event/subsystems/vmd/vmd.o 00:01:59.536 CC module/event/subsystems/sock/sock.o 00:01:59.536 CC module/event/subsystems/scheduler/scheduler.o 00:01:59.536 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:59.536 CC module/event/subsystems/keyring/keyring.o 00:01:59.536 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:59.536 CC module/event/subsystems/fsdev/fsdev.o 00:01:59.536 LIB libspdk_event_vhost_blk.a 00:01:59.536 LIB libspdk_event_fsdev.a 00:01:59.536 LIB libspdk_event_vfu_tgt.a 00:01:59.536 LIB libspdk_event_keyring.a 00:01:59.536 LIB libspdk_event_sock.a 00:01:59.536 LIB libspdk_event_scheduler.a 00:01:59.536 LIB libspdk_event_vmd.a 00:01:59.536 LIB libspdk_event_iobuf.a 00:01:59.536 SO libspdk_event_vhost_blk.so.3.0 00:01:59.536 SO libspdk_event_fsdev.so.1.0 00:01:59.536 SO libspdk_event_vfu_tgt.so.3.0 00:01:59.536 SO libspdk_event_keyring.so.1.0 00:01:59.536 SO libspdk_event_sock.so.5.0 00:01:59.536 SO libspdk_event_scheduler.so.4.0 00:01:59.536 SO libspdk_event_vmd.so.6.0 00:01:59.536 SO libspdk_event_iobuf.so.3.0 00:01:59.798 SYMLINK libspdk_event_vhost_blk.so 00:01:59.798 SYMLINK libspdk_event_fsdev.so 00:01:59.798 SYMLINK 
libspdk_event_vfu_tgt.so 00:01:59.798 SYMLINK libspdk_event_keyring.so 00:01:59.798 SYMLINK libspdk_event_scheduler.so 00:01:59.798 SYMLINK libspdk_event_sock.so 00:01:59.798 SYMLINK libspdk_event_vmd.so 00:01:59.798 SYMLINK libspdk_event_iobuf.so 00:02:00.060 CC module/event/subsystems/accel/accel.o 00:02:00.322 LIB libspdk_event_accel.a 00:02:00.322 SO libspdk_event_accel.so.6.0 00:02:00.322 SYMLINK libspdk_event_accel.so 00:02:00.584 CC module/event/subsystems/bdev/bdev.o 00:02:00.846 LIB libspdk_event_bdev.a 00:02:00.846 SO libspdk_event_bdev.so.6.0 00:02:01.108 SYMLINK libspdk_event_bdev.so 00:02:01.370 CC module/event/subsystems/scsi/scsi.o 00:02:01.370 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:01.370 CC module/event/subsystems/nbd/nbd.o 00:02:01.370 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:01.370 CC module/event/subsystems/ublk/ublk.o 00:02:01.632 LIB libspdk_event_ublk.a 00:02:01.632 LIB libspdk_event_scsi.a 00:02:01.632 LIB libspdk_event_nbd.a 00:02:01.632 SO libspdk_event_nbd.so.6.0 00:02:01.632 SO libspdk_event_ublk.so.3.0 00:02:01.632 SO libspdk_event_scsi.so.6.0 00:02:01.632 LIB libspdk_event_nvmf.a 00:02:01.632 SYMLINK libspdk_event_nbd.so 00:02:01.632 SYMLINK libspdk_event_ublk.so 00:02:01.632 SYMLINK libspdk_event_scsi.so 00:02:01.632 SO libspdk_event_nvmf.so.6.0 00:02:01.632 SYMLINK libspdk_event_nvmf.so 00:02:01.894 CC module/event/subsystems/iscsi/iscsi.o 00:02:01.894 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:02.155 LIB libspdk_event_vhost_scsi.a 00:02:02.155 LIB libspdk_event_iscsi.a 00:02:02.155 SO libspdk_event_vhost_scsi.so.3.0 00:02:02.155 SO libspdk_event_iscsi.so.6.0 00:02:02.155 SYMLINK libspdk_event_vhost_scsi.so 00:02:02.417 SYMLINK libspdk_event_iscsi.so 00:02:02.417 SO libspdk.so.6.0 00:02:02.417 SYMLINK libspdk.so 00:02:02.991 CXX app/trace/trace.o 00:02:02.991 CC app/spdk_lspci/spdk_lspci.o 00:02:02.991 CC app/spdk_top/spdk_top.o 00:02:02.991 TEST_HEADER include/spdk/accel.h 00:02:02.991 CC app/spdk_nvme_identify/identify.o 00:02:02.991 TEST_HEADER include/spdk/accel_module.h 00:02:02.991 TEST_HEADER include/spdk/assert.h 00:02:02.991 CC app/spdk_nvme_perf/perf.o 00:02:02.991 TEST_HEADER include/spdk/barrier.h 00:02:02.991 TEST_HEADER include/spdk/base64.h 00:02:02.991 CC app/trace_record/trace_record.o 00:02:02.991 TEST_HEADER include/spdk/bdev.h 00:02:02.991 TEST_HEADER include/spdk/bdev_module.h 00:02:02.991 CC test/rpc_client/rpc_client_test.o 00:02:02.991 TEST_HEADER include/spdk/bdev_zone.h 00:02:02.991 TEST_HEADER include/spdk/bit_array.h 00:02:02.991 TEST_HEADER include/spdk/bit_pool.h 00:02:02.991 CC app/spdk_nvme_discover/discovery_aer.o 00:02:02.991 TEST_HEADER include/spdk/blob_bdev.h 00:02:02.991 TEST_HEADER include/spdk/blobfs.h 00:02:02.991 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:02.991 TEST_HEADER include/spdk/conf.h 00:02:02.991 TEST_HEADER include/spdk/blob.h 00:02:02.991 TEST_HEADER include/spdk/config.h 00:02:02.991 TEST_HEADER include/spdk/cpuset.h 00:02:02.991 TEST_HEADER include/spdk/crc16.h 00:02:02.991 TEST_HEADER include/spdk/crc32.h 00:02:02.991 TEST_HEADER include/spdk/dif.h 00:02:02.991 TEST_HEADER include/spdk/crc64.h 00:02:02.991 TEST_HEADER include/spdk/dma.h 00:02:02.991 TEST_HEADER include/spdk/endian.h 00:02:02.991 TEST_HEADER include/spdk/env_dpdk.h 00:02:02.991 TEST_HEADER include/spdk/env.h 00:02:02.991 TEST_HEADER include/spdk/event.h 00:02:02.991 TEST_HEADER include/spdk/fd_group.h 00:02:02.991 TEST_HEADER include/spdk/fd.h 00:02:02.991 TEST_HEADER include/spdk/file.h 00:02:02.991 
TEST_HEADER include/spdk/fsdev_module.h 00:02:02.991 TEST_HEADER include/spdk/fsdev.h 00:02:02.991 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:02.991 TEST_HEADER include/spdk/ftl.h 00:02:02.991 CC app/iscsi_tgt/iscsi_tgt.o 00:02:02.991 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:02.991 TEST_HEADER include/spdk/gpt_spec.h 00:02:02.991 TEST_HEADER include/spdk/hexlify.h 00:02:02.991 TEST_HEADER include/spdk/idxd.h 00:02:02.991 TEST_HEADER include/spdk/histogram_data.h 00:02:02.991 TEST_HEADER include/spdk/init.h 00:02:02.991 TEST_HEADER include/spdk/idxd_spec.h 00:02:02.991 TEST_HEADER include/spdk/ioat.h 00:02:02.991 TEST_HEADER include/spdk/ioat_spec.h 00:02:02.991 CC app/spdk_dd/spdk_dd.o 00:02:02.991 TEST_HEADER include/spdk/jsonrpc.h 00:02:02.991 TEST_HEADER include/spdk/iscsi_spec.h 00:02:02.991 TEST_HEADER include/spdk/json.h 00:02:02.991 CC app/nvmf_tgt/nvmf_main.o 00:02:02.991 TEST_HEADER include/spdk/keyring.h 00:02:02.991 TEST_HEADER include/spdk/likely.h 00:02:02.991 TEST_HEADER include/spdk/keyring_module.h 00:02:02.991 TEST_HEADER include/spdk/log.h 00:02:02.991 TEST_HEADER include/spdk/lvol.h 00:02:02.991 TEST_HEADER include/spdk/md5.h 00:02:02.991 TEST_HEADER include/spdk/memory.h 00:02:02.991 TEST_HEADER include/spdk/mmio.h 00:02:02.991 TEST_HEADER include/spdk/nbd.h 00:02:02.991 TEST_HEADER include/spdk/net.h 00:02:02.991 TEST_HEADER include/spdk/notify.h 00:02:02.991 TEST_HEADER include/spdk/nvme.h 00:02:02.991 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:02.991 TEST_HEADER include/spdk/nvme_intel.h 00:02:02.991 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:02.991 TEST_HEADER include/spdk/nvme_spec.h 00:02:02.991 TEST_HEADER include/spdk/nvme_zns.h 00:02:02.991 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:02.991 TEST_HEADER include/spdk/nvmf.h 00:02:02.991 CC app/spdk_tgt/spdk_tgt.o 00:02:02.991 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:02.991 TEST_HEADER include/spdk/nvmf_spec.h 00:02:02.991 TEST_HEADER include/spdk/opal.h 00:02:02.991 TEST_HEADER include/spdk/nvmf_transport.h 00:02:02.991 TEST_HEADER include/spdk/pci_ids.h 00:02:02.991 TEST_HEADER include/spdk/opal_spec.h 00:02:02.991 TEST_HEADER include/spdk/pipe.h 00:02:02.991 TEST_HEADER include/spdk/reduce.h 00:02:02.991 TEST_HEADER include/spdk/rpc.h 00:02:02.991 TEST_HEADER include/spdk/queue.h 00:02:02.991 TEST_HEADER include/spdk/scheduler.h 00:02:02.991 TEST_HEADER include/spdk/scsi.h 00:02:02.991 TEST_HEADER include/spdk/scsi_spec.h 00:02:02.991 TEST_HEADER include/spdk/sock.h 00:02:02.991 TEST_HEADER include/spdk/string.h 00:02:02.991 TEST_HEADER include/spdk/stdinc.h 00:02:02.991 TEST_HEADER include/spdk/trace.h 00:02:02.991 TEST_HEADER include/spdk/thread.h 00:02:02.991 TEST_HEADER include/spdk/trace_parser.h 00:02:02.991 TEST_HEADER include/spdk/ublk.h 00:02:02.991 TEST_HEADER include/spdk/tree.h 00:02:02.992 TEST_HEADER include/spdk/util.h 00:02:02.992 TEST_HEADER include/spdk/version.h 00:02:02.992 TEST_HEADER include/spdk/uuid.h 00:02:02.992 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:02.992 TEST_HEADER include/spdk/vhost.h 00:02:02.992 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:02.992 TEST_HEADER include/spdk/vmd.h 00:02:02.992 TEST_HEADER include/spdk/xor.h 00:02:02.992 TEST_HEADER include/spdk/zipf.h 00:02:02.992 CXX test/cpp_headers/accel.o 00:02:02.992 CXX test/cpp_headers/accel_module.o 00:02:02.992 CXX test/cpp_headers/assert.o 00:02:02.992 CXX test/cpp_headers/barrier.o 00:02:02.992 CXX test/cpp_headers/base64.o 00:02:02.992 CXX test/cpp_headers/bdev.o 00:02:02.992 CXX 
test/cpp_headers/bdev_zone.o 00:02:02.992 CXX test/cpp_headers/bdev_module.o 00:02:02.992 CXX test/cpp_headers/bit_array.o 00:02:02.992 CXX test/cpp_headers/bit_pool.o 00:02:02.992 CXX test/cpp_headers/blob_bdev.o 00:02:02.992 CXX test/cpp_headers/blobfs_bdev.o 00:02:02.992 CXX test/cpp_headers/blobfs.o 00:02:02.992 CXX test/cpp_headers/blob.o 00:02:02.992 CXX test/cpp_headers/conf.o 00:02:02.992 CXX test/cpp_headers/config.o 00:02:02.992 CXX test/cpp_headers/cpuset.o 00:02:02.992 CXX test/cpp_headers/crc16.o 00:02:02.992 CXX test/cpp_headers/crc32.o 00:02:02.992 CXX test/cpp_headers/dif.o 00:02:02.992 CXX test/cpp_headers/crc64.o 00:02:02.992 CXX test/cpp_headers/dma.o 00:02:02.992 CXX test/cpp_headers/env_dpdk.o 00:02:02.992 CXX test/cpp_headers/endian.o 00:02:02.992 CXX test/cpp_headers/env.o 00:02:02.992 CXX test/cpp_headers/fd_group.o 00:02:02.992 CXX test/cpp_headers/event.o 00:02:02.992 CXX test/cpp_headers/fd.o 00:02:02.992 CXX test/cpp_headers/file.o 00:02:02.992 CXX test/cpp_headers/fsdev.o 00:02:02.992 CXX test/cpp_headers/fsdev_module.o 00:02:02.992 CXX test/cpp_headers/ftl.o 00:02:02.992 CXX test/cpp_headers/fuse_dispatcher.o 00:02:02.992 CXX test/cpp_headers/gpt_spec.o 00:02:02.992 CXX test/cpp_headers/hexlify.o 00:02:02.992 CXX test/cpp_headers/histogram_data.o 00:02:02.992 CXX test/cpp_headers/idxd.o 00:02:03.261 CXX test/cpp_headers/idxd_spec.o 00:02:03.261 CXX test/cpp_headers/ioat.o 00:02:03.261 CXX test/cpp_headers/init.o 00:02:03.261 CXX test/cpp_headers/ioat_spec.o 00:02:03.261 CXX test/cpp_headers/iscsi_spec.o 00:02:03.261 CXX test/cpp_headers/json.o 00:02:03.261 CXX test/cpp_headers/jsonrpc.o 00:02:03.261 CXX test/cpp_headers/keyring.o 00:02:03.261 CXX test/cpp_headers/keyring_module.o 00:02:03.261 CXX test/cpp_headers/likely.o 00:02:03.261 CXX test/cpp_headers/md5.o 00:02:03.261 CXX test/cpp_headers/log.o 00:02:03.261 CXX test/cpp_headers/lvol.o 00:02:03.261 CXX test/cpp_headers/memory.o 00:02:03.261 LINK spdk_lspci 00:02:03.261 CXX test/cpp_headers/mmio.o 00:02:03.261 CXX test/cpp_headers/nbd.o 00:02:03.261 CXX test/cpp_headers/net.o 00:02:03.261 CXX test/cpp_headers/nvme.o 00:02:03.261 CXX test/cpp_headers/notify.o 00:02:03.261 CXX test/cpp_headers/nvme_intel.o 00:02:03.261 CXX test/cpp_headers/nvme_ocssd.o 00:02:03.261 CXX test/cpp_headers/nvme_spec.o 00:02:03.261 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:03.261 CXX test/cpp_headers/nvmf.o 00:02:03.261 CXX test/cpp_headers/nvme_zns.o 00:02:03.261 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:03.261 CXX test/cpp_headers/nvmf_cmd.o 00:02:03.261 CXX test/cpp_headers/opal.o 00:02:03.261 CXX test/cpp_headers/nvmf_spec.o 00:02:03.261 CXX test/cpp_headers/opal_spec.o 00:02:03.261 CXX test/cpp_headers/nvmf_transport.o 00:02:03.261 CXX test/cpp_headers/pci_ids.o 00:02:03.261 CXX test/cpp_headers/reduce.o 00:02:03.261 CXX test/cpp_headers/pipe.o 00:02:03.261 CXX test/cpp_headers/queue.o 00:02:03.261 CC test/thread/poller_perf/poller_perf.o 00:02:03.261 CXX test/cpp_headers/scsi.o 00:02:03.261 CXX test/cpp_headers/rpc.o 00:02:03.261 CXX test/cpp_headers/scheduler.o 00:02:03.261 CXX test/cpp_headers/scsi_spec.o 00:02:03.261 CXX test/cpp_headers/stdinc.o 00:02:03.261 CXX test/cpp_headers/sock.o 00:02:03.261 CXX test/cpp_headers/string.o 00:02:03.261 CXX test/cpp_headers/trace_parser.o 00:02:03.261 CC test/env/pci/pci_ut.o 00:02:03.261 CXX test/cpp_headers/thread.o 00:02:03.261 CXX test/cpp_headers/trace.o 00:02:03.261 CXX test/cpp_headers/tree.o 00:02:03.261 CC examples/ioat/perf/perf.o 00:02:03.261 CXX 
test/cpp_headers/ublk.o 00:02:03.261 CXX test/cpp_headers/util.o 00:02:03.261 CC test/app/histogram_perf/histogram_perf.o 00:02:03.261 CXX test/cpp_headers/uuid.o 00:02:03.261 CXX test/cpp_headers/version.o 00:02:03.261 CXX test/cpp_headers/vfio_user_pci.o 00:02:03.261 CC test/app/jsoncat/jsoncat.o 00:02:03.261 CXX test/cpp_headers/vmd.o 00:02:03.261 CXX test/cpp_headers/vfio_user_spec.o 00:02:03.261 CXX test/cpp_headers/vhost.o 00:02:03.261 CXX test/cpp_headers/xor.o 00:02:03.261 CXX test/cpp_headers/zipf.o 00:02:03.261 CC examples/ioat/verify/verify.o 00:02:03.261 CC test/app/bdev_svc/bdev_svc.o 00:02:03.261 CC test/env/vtophys/vtophys.o 00:02:03.261 LINK rpc_client_test 00:02:03.261 CC test/app/stub/stub.o 00:02:03.261 CC examples/util/zipf/zipf.o 00:02:03.261 CC app/fio/nvme/fio_plugin.o 00:02:03.261 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:03.261 CC test/dma/test_dma/test_dma.o 00:02:03.261 CC test/env/memory/memory_ut.o 00:02:03.261 LINK spdk_nvme_discover 00:02:03.529 LINK spdk_trace_record 00:02:03.529 CC app/fio/bdev/fio_plugin.o 00:02:03.529 LINK nvmf_tgt 00:02:03.793 LINK interrupt_tgt 00:02:03.793 LINK iscsi_tgt 00:02:03.793 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:03.793 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:03.793 LINK spdk_tgt 00:02:04.056 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:04.056 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:04.056 CC test/env/mem_callbacks/mem_callbacks.o 00:02:04.056 LINK spdk_trace 00:02:04.056 LINK vtophys 00:02:04.316 LINK jsoncat 00:02:04.316 LINK spdk_dd 00:02:04.316 LINK bdev_svc 00:02:04.316 LINK histogram_perf 00:02:04.316 LINK ioat_perf 00:02:04.316 LINK env_dpdk_post_init 00:02:04.316 LINK verify 00:02:04.576 LINK poller_perf 00:02:04.576 LINK zipf 00:02:04.576 CC app/vhost/vhost.o 00:02:04.576 LINK stub 00:02:04.576 LINK nvme_fuzz 00:02:04.576 LINK vhost_fuzz 00:02:04.840 LINK spdk_bdev 00:02:04.840 LINK spdk_nvme_perf 00:02:04.840 LINK spdk_nvme 00:02:04.840 LINK test_dma 00:02:04.840 LINK vhost 00:02:04.840 LINK spdk_top 00:02:04.840 LINK pci_ut 00:02:04.840 LINK mem_callbacks 00:02:05.103 LINK spdk_nvme_identify 00:02:05.103 CC test/event/event_perf/event_perf.o 00:02:05.103 CC test/event/reactor/reactor.o 00:02:05.103 CC test/event/reactor_perf/reactor_perf.o 00:02:05.103 CC test/event/app_repeat/app_repeat.o 00:02:05.103 CC test/event/scheduler/scheduler.o 00:02:05.103 CC examples/sock/hello_world/hello_sock.o 00:02:05.103 CC examples/idxd/perf/perf.o 00:02:05.103 CC examples/vmd/led/led.o 00:02:05.103 CC examples/vmd/lsvmd/lsvmd.o 00:02:05.103 CC examples/thread/thread/thread_ex.o 00:02:05.103 LINK event_perf 00:02:05.103 LINK reactor 00:02:05.103 LINK reactor_perf 00:02:05.365 LINK app_repeat 00:02:05.365 LINK lsvmd 00:02:05.365 LINK led 00:02:05.365 LINK memory_ut 00:02:05.365 LINK scheduler 00:02:05.365 CC test/nvme/sgl/sgl.o 00:02:05.365 CC test/nvme/overhead/overhead.o 00:02:05.365 CC test/nvme/e2edp/nvme_dp.o 00:02:05.365 LINK hello_sock 00:02:05.365 CC test/nvme/simple_copy/simple_copy.o 00:02:05.365 CC test/nvme/aer/aer.o 00:02:05.365 CC test/nvme/reserve/reserve.o 00:02:05.365 CC test/nvme/reset/reset.o 00:02:05.365 CC test/nvme/startup/startup.o 00:02:05.365 CC test/nvme/connect_stress/connect_stress.o 00:02:05.365 CC test/nvme/compliance/nvme_compliance.o 00:02:05.365 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:05.365 CC test/nvme/err_injection/err_injection.o 00:02:05.365 CC test/nvme/cuse/cuse.o 00:02:05.365 CC test/blobfs/mkfs/mkfs.o 00:02:05.365 CC test/nvme/fdp/fdp.o 
00:02:05.365 CC test/nvme/boot_partition/boot_partition.o 00:02:05.365 CC test/nvme/fused_ordering/fused_ordering.o 00:02:05.365 CC test/accel/dif/dif.o 00:02:05.365 LINK idxd_perf 00:02:05.365 LINK thread 00:02:05.627 CC test/lvol/esnap/esnap.o 00:02:05.627 LINK startup 00:02:05.627 LINK boot_partition 00:02:05.627 LINK connect_stress 00:02:05.627 LINK reserve 00:02:05.627 LINK err_injection 00:02:05.627 LINK fused_ordering 00:02:05.627 LINK doorbell_aers 00:02:05.627 LINK simple_copy 00:02:05.627 LINK mkfs 00:02:05.627 LINK nvme_dp 00:02:05.627 LINK sgl 00:02:05.627 LINK overhead 00:02:05.627 LINK reset 00:02:05.627 LINK iscsi_fuzz 00:02:05.628 LINK aer 00:02:05.889 LINK nvme_compliance 00:02:05.889 LINK fdp 00:02:05.889 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:05.889 CC examples/nvme/reconnect/reconnect.o 00:02:05.889 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:05.889 CC examples/nvme/arbitration/arbitration.o 00:02:05.889 CC examples/nvme/hello_world/hello_world.o 00:02:05.889 CC examples/nvme/hotplug/hotplug.o 00:02:05.889 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:05.889 CC examples/nvme/abort/abort.o 00:02:06.150 LINK dif 00:02:06.150 CC examples/accel/perf/accel_perf.o 00:02:06.150 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:06.150 CC examples/blob/hello_world/hello_blob.o 00:02:06.150 CC examples/blob/cli/blobcli.o 00:02:06.150 LINK cmb_copy 00:02:06.150 LINK pmr_persistence 00:02:06.150 LINK hello_world 00:02:06.150 LINK hotplug 00:02:06.412 LINK reconnect 00:02:06.412 LINK arbitration 00:02:06.412 LINK abort 00:02:06.412 LINK hello_blob 00:02:06.412 LINK hello_fsdev 00:02:06.412 LINK nvme_manage 00:02:06.673 LINK accel_perf 00:02:06.673 LINK blobcli 00:02:06.673 LINK cuse 00:02:06.673 CC test/bdev/bdevio/bdevio.o 00:02:07.247 LINK bdevio 00:02:07.247 CC examples/bdev/hello_world/hello_bdev.o 00:02:07.247 CC examples/bdev/bdevperf/bdevperf.o 00:02:07.510 LINK hello_bdev 00:02:07.772 LINK bdevperf 00:02:08.716 CC examples/nvmf/nvmf/nvmf.o 00:02:08.716 LINK nvmf 00:02:09.662 LINK esnap 00:02:09.923 00:02:09.923 real 0m55.120s 00:02:09.923 user 8m8.425s 00:02:09.923 sys 6m4.720s 00:02:09.923 18:18:03 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:09.923 18:18:03 make -- common/autotest_common.sh@10 -- $ set +x 00:02:09.923 ************************************ 00:02:09.923 END TEST make 00:02:09.923 ************************************ 00:02:09.923 18:18:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:09.923 18:18:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:09.923 18:18:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:09.923 18:18:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.923 18:18:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:09.923 18:18:03 -- pm/common@44 -- $ pid=894376 00:02:09.923 18:18:03 -- pm/common@50 -- $ kill -TERM 894376 00:02:09.923 18:18:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.923 18:18:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:09.923 18:18:03 -- pm/common@44 -- $ pid=894377 00:02:09.923 18:18:03 -- pm/common@50 -- $ kill -TERM 894377 00:02:09.923 18:18:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.923 18:18:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
00:02:09.923 18:18:03 -- pm/common@44 -- $ pid=894379 00:02:09.923 18:18:03 -- pm/common@50 -- $ kill -TERM 894379 00:02:09.923 18:18:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.923 18:18:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:09.923 18:18:03 -- pm/common@44 -- $ pid=894402 00:02:09.923 18:18:03 -- pm/common@50 -- $ sudo -E kill -TERM 894402 00:02:10.186 18:18:04 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:10.186 18:18:04 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:10.186 18:18:04 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:10.186 18:18:04 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:10.186 18:18:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:10.186 18:18:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:10.186 18:18:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:10.186 18:18:04 -- scripts/common.sh@336 -- # IFS=.-: 00:02:10.186 18:18:04 -- scripts/common.sh@336 -- # read -ra ver1 00:02:10.186 18:18:04 -- scripts/common.sh@337 -- # IFS=.-: 00:02:10.186 18:18:04 -- scripts/common.sh@337 -- # read -ra ver2 00:02:10.186 18:18:04 -- scripts/common.sh@338 -- # local 'op=<' 00:02:10.186 18:18:04 -- scripts/common.sh@340 -- # ver1_l=2 00:02:10.186 18:18:04 -- scripts/common.sh@341 -- # ver2_l=1 00:02:10.186 18:18:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:10.186 18:18:04 -- scripts/common.sh@344 -- # case "$op" in 00:02:10.186 18:18:04 -- scripts/common.sh@345 -- # : 1 00:02:10.186 18:18:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:10.186 18:18:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:10.186 18:18:04 -- scripts/common.sh@365 -- # decimal 1 00:02:10.186 18:18:04 -- scripts/common.sh@353 -- # local d=1 00:02:10.186 18:18:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:10.186 18:18:04 -- scripts/common.sh@355 -- # echo 1 00:02:10.186 18:18:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:10.186 18:18:04 -- scripts/common.sh@366 -- # decimal 2 00:02:10.186 18:18:04 -- scripts/common.sh@353 -- # local d=2 00:02:10.186 18:18:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:10.186 18:18:04 -- scripts/common.sh@355 -- # echo 2 00:02:10.186 18:18:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:10.186 18:18:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:10.186 18:18:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:10.186 18:18:04 -- scripts/common.sh@368 -- # return 0 00:02:10.186 18:18:04 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:10.186 18:18:04 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:10.186 --rc genhtml_branch_coverage=1 00:02:10.186 --rc genhtml_function_coverage=1 00:02:10.186 --rc genhtml_legend=1 00:02:10.186 --rc geninfo_all_blocks=1 00:02:10.186 --rc geninfo_unexecuted_blocks=1 00:02:10.186 00:02:10.186 ' 00:02:10.186 18:18:04 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:10.186 --rc genhtml_branch_coverage=1 00:02:10.186 --rc genhtml_function_coverage=1 00:02:10.186 --rc genhtml_legend=1 00:02:10.186 --rc geninfo_all_blocks=1 00:02:10.186 --rc geninfo_unexecuted_blocks=1 00:02:10.186 00:02:10.186 ' 00:02:10.186 18:18:04 -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:10.186 --rc genhtml_branch_coverage=1 00:02:10.186 --rc genhtml_function_coverage=1 00:02:10.186 --rc genhtml_legend=1 00:02:10.186 --rc geninfo_all_blocks=1 00:02:10.186 --rc geninfo_unexecuted_blocks=1 00:02:10.186 00:02:10.186 ' 00:02:10.186 18:18:04 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:10.186 --rc genhtml_branch_coverage=1 00:02:10.186 --rc genhtml_function_coverage=1 00:02:10.186 --rc genhtml_legend=1 00:02:10.186 --rc geninfo_all_blocks=1 00:02:10.186 --rc geninfo_unexecuted_blocks=1 00:02:10.186 00:02:10.186 ' 00:02:10.186 18:18:04 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:10.186 18:18:04 -- nvmf/common.sh@7 -- # uname -s 00:02:10.186 18:18:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:10.186 18:18:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:10.186 18:18:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:10.186 18:18:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:10.186 18:18:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:10.186 18:18:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:10.186 18:18:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:10.186 18:18:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:10.186 18:18:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:10.186 18:18:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:10.186 18:18:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:10.186 18:18:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:10.186 18:18:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:10.186 18:18:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:10.186 18:18:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:10.186 18:18:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:10.186 18:18:04 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.186 18:18:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:10.186 18:18:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:10.186 18:18:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.186 18:18:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.186 18:18:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.186 18:18:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.186 18:18:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.186 18:18:04 -- paths/export.sh@5 -- # export PATH 00:02:10.186 18:18:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.186 18:18:04 -- nvmf/common.sh@51 -- # : 0 00:02:10.186 18:18:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:10.186 18:18:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:10.186 18:18:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:10.186 18:18:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:10.186 18:18:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:10.186 18:18:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:10.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:10.186 18:18:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:10.186 18:18:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:10.186 18:18:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:10.186 18:18:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:10.186 18:18:04 -- spdk/autotest.sh@32 -- # uname -s 00:02:10.186 18:18:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:10.186 18:18:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:10.186 18:18:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:10.186 18:18:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:10.186 18:18:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:10.186 18:18:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:10.186 18:18:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:10.186 18:18:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:10.186 18:18:04 -- spdk/autotest.sh@48 -- # udevadm_pid=959713 00:02:10.187 18:18:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:10.187 18:18:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:10.187 18:18:04 -- pm/common@17 -- # local monitor 00:02:10.187 18:18:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.187 18:18:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.187 18:18:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.187 18:18:04 -- pm/common@21 -- # date +%s 00:02:10.187 18:18:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.187 18:18:04 -- pm/common@21 -- # date +%s 00:02:10.187 18:18:04 -- pm/common@25 -- # sleep 1 00:02:10.187 18:18:04 -- pm/common@21 -- # date +%s 00:02:10.187 18:18:04 -- pm/common@21 -- # date +%s 00:02:10.187 18:18:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404284 00:02:10.187 18:18:04 -- pm/common@21 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404284 00:02:10.187 18:18:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404284 00:02:10.187 18:18:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404284 00:02:10.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404284_collect-vmstat.pm.log 00:02:10.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404284_collect-cpu-load.pm.log 00:02:10.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404284_collect-cpu-temp.pm.log 00:02:10.448 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404284_collect-bmc-pm.bmc.pm.log 00:02:11.394 18:18:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:11.394 18:18:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:11.394 18:18:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:11.394 18:18:05 -- common/autotest_common.sh@10 -- # set +x 00:02:11.394 18:18:05 -- spdk/autotest.sh@59 -- # create_test_list 00:02:11.394 18:18:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:11.394 18:18:05 -- common/autotest_common.sh@10 -- # set +x 00:02:11.394 18:18:05 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:11.394 18:18:05 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.394 18:18:05 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.394 18:18:05 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.394 18:18:05 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.394 18:18:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:11.394 18:18:05 -- common/autotest_common.sh@1455 -- # uname 00:02:11.394 18:18:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:11.394 18:18:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:11.394 18:18:05 -- common/autotest_common.sh@1475 -- # uname 00:02:11.394 18:18:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:11.394 18:18:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:11.394 18:18:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:11.394 lcov: LCOV version 1.15 00:02:11.394 18:18:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:26.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:26.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:44.460 18:18:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:44.460 18:18:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:44.460 18:18:35 -- common/autotest_common.sh@10 -- # set +x 00:02:44.460 18:18:35 -- spdk/autotest.sh@78 -- # rm -f 00:02:44.460 18:18:35 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.033 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:45.033 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:45.033 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:45.033 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:45.033 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:45.033 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:45.033 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:45.295 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:45.296 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:45.296 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:45.870 18:18:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:45.870 18:18:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:45.870 18:18:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:45.870 18:18:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:45.870 18:18:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:45.870 18:18:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:45.870 18:18:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:45.870 18:18:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.870 18:18:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:45.870 18:18:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:45.870 18:18:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:45.870 18:18:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:45.870 18:18:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:45.870 18:18:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:45.870 18:18:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.870 No valid GPT data, bailing 00:02:45.870 18:18:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.870 18:18:39 -- scripts/common.sh@394 -- # pt= 00:02:45.870 18:18:39 -- scripts/common.sh@395 -- # return 1 00:02:45.870 18:18:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.870 1+0 records in 00:02:45.870 
1+0 records out 00:02:45.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467954 s, 224 MB/s 00:02:45.870 18:18:39 -- spdk/autotest.sh@105 -- # sync 00:02:45.870 18:18:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.870 18:18:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:45.870 18:18:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.885 18:18:48 -- spdk/autotest.sh@111 -- # uname -s 00:02:55.885 18:18:48 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:55.885 18:18:48 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:55.885 18:18:48 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.880 Hugepages 00:02:57.880 node hugesize free / total 00:02:57.880 node0 1048576kB 0 / 0 00:02:57.880 node0 2048kB 0 / 0 00:02:57.880 node1 1048576kB 0 / 0 00:02:57.880 node1 2048kB 0 / 0 00:02:57.880 00:02:57.880 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.880 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:57.881 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:58.204 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:58.204 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:58.204 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:58.204 18:18:52 -- spdk/autotest.sh@117 -- # uname -s 00:02:58.204 18:18:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:58.204 18:18:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:58.204 18:18:52 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.563 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:01.563 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:01.563 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:01.563 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:01.563 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:01.563 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:01.563 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:01.825 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:03.743 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:04.005 18:18:57 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:04.948 18:18:58 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:04.948 18:18:58 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:04.948 18:18:58 -- 
common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:04.948 18:18:58 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:04.948 18:18:58 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:04.948 18:18:58 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:04.948 18:18:58 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:04.948 18:18:58 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:04.948 18:18:58 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:04.948 18:18:58 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:04.948 18:18:58 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:04.948 18:18:58 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.162 Waiting for block devices as requested 00:03:09.162 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:09.162 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:09.423 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:09.423 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:09.423 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:09.423 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:09.685 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:09.685 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:09.685 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:09.946 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:10.208 18:19:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:10.208 18:19:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:10.208 18:19:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:10.208 18:19:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:10.208 18:19:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:10.208 18:19:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:10.208 18:19:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:10.208 18:19:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:10.208 18:19:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:10.208 18:19:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:10.208 18:19:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:10.208 
18:19:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:10.208 18:19:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:10.208 18:19:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:10.208 18:19:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:10.208 18:19:04 -- common/autotest_common.sh@1541 -- # continue 00:03:10.208 18:19:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:10.208 18:19:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:10.208 18:19:04 -- common/autotest_common.sh@10 -- # set +x 00:03:10.208 18:19:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:10.208 18:19:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:10.208 18:19:04 -- common/autotest_common.sh@10 -- # set +x 00:03:10.208 18:19:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.418 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:14.418 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:14.418 18:19:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:14.418 18:19:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:14.418 18:19:08 -- common/autotest_common.sh@10 -- # set +x 00:03:14.418 18:19:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:14.418 18:19:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:14.418 18:19:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:14.418 18:19:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:14.418 18:19:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:14.418 18:19:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:14.418 18:19:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:14.418 18:19:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:14.418 18:19:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:14.418 18:19:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:14.418 18:19:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.418 18:19:08 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:14.418 18:19:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:14.418 18:19:08 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:14.418 18:19:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:14.418 18:19:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:14.418 18:19:08 -- common/autotest_common.sh@1564 -- # cat 
/sys/bus/pci/devices/0000:65:00.0/device 00:03:14.418 18:19:08 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:14.418 18:19:08 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:14.418 18:19:08 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:14.418 18:19:08 -- common/autotest_common.sh@1570 -- # return 0 00:03:14.418 18:19:08 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:14.418 18:19:08 -- common/autotest_common.sh@1578 -- # return 0 00:03:14.418 18:19:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:14.418 18:19:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:14.418 18:19:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:14.418 18:19:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:14.418 18:19:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:14.418 18:19:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:14.418 18:19:08 -- common/autotest_common.sh@10 -- # set +x 00:03:14.418 18:19:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:14.418 18:19:08 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:14.418 18:19:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.418 18:19:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.418 18:19:08 -- common/autotest_common.sh@10 -- # set +x 00:03:14.418 ************************************ 00:03:14.418 START TEST env 00:03:14.418 ************************************ 00:03:14.418 18:19:08 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:14.680 * Looking for test storage... 00:03:14.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:14.680 18:19:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:14.680 18:19:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:14.680 18:19:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:14.680 18:19:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:14.680 18:19:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:14.680 18:19:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:14.680 18:19:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:14.680 18:19:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:14.680 18:19:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:14.680 18:19:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:14.680 18:19:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:14.680 18:19:08 env -- scripts/common.sh@344 -- # case "$op" in 00:03:14.680 18:19:08 env -- scripts/common.sh@345 -- # : 1 00:03:14.680 18:19:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:14.680 18:19:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:14.680 18:19:08 env -- scripts/common.sh@365 -- # decimal 1 00:03:14.680 18:19:08 env -- scripts/common.sh@353 -- # local d=1 00:03:14.680 18:19:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:14.680 18:19:08 env -- scripts/common.sh@355 -- # echo 1 00:03:14.680 18:19:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:14.680 18:19:08 env -- scripts/common.sh@366 -- # decimal 2 00:03:14.680 18:19:08 env -- scripts/common.sh@353 -- # local d=2 00:03:14.680 18:19:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:14.680 18:19:08 env -- scripts/common.sh@355 -- # echo 2 00:03:14.680 18:19:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:14.680 18:19:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:14.680 18:19:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:14.680 18:19:08 env -- scripts/common.sh@368 -- # return 0 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.680 --rc genhtml_branch_coverage=1 00:03:14.680 --rc genhtml_function_coverage=1 00:03:14.680 --rc genhtml_legend=1 00:03:14.680 --rc geninfo_all_blocks=1 00:03:14.680 --rc geninfo_unexecuted_blocks=1 00:03:14.680 00:03:14.680 ' 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.680 --rc genhtml_branch_coverage=1 00:03:14.680 --rc genhtml_function_coverage=1 00:03:14.680 --rc genhtml_legend=1 00:03:14.680 --rc geninfo_all_blocks=1 00:03:14.680 --rc geninfo_unexecuted_blocks=1 00:03:14.680 00:03:14.680 ' 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.680 --rc genhtml_branch_coverage=1 00:03:14.680 --rc genhtml_function_coverage=1 00:03:14.680 --rc genhtml_legend=1 00:03:14.680 --rc geninfo_all_blocks=1 00:03:14.680 --rc geninfo_unexecuted_blocks=1 00:03:14.680 00:03:14.680 ' 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:14.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.680 --rc genhtml_branch_coverage=1 00:03:14.680 --rc genhtml_function_coverage=1 00:03:14.680 --rc genhtml_legend=1 00:03:14.680 --rc geninfo_all_blocks=1 00:03:14.680 --rc geninfo_unexecuted_blocks=1 00:03:14.680 00:03:14.680 ' 00:03:14.680 18:19:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.680 18:19:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.680 18:19:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:14.680 ************************************ 00:03:14.680 START TEST env_memory 00:03:14.680 ************************************ 00:03:14.680 18:19:08 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:14.680 00:03:14.680 00:03:14.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.680 http://cunit.sourceforge.net/ 00:03:14.680 00:03:14.680 00:03:14.680 Suite: memory 00:03:14.680 Test: alloc and free memory map ...[2024-10-08 18:19:08.710018] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:14.680 passed 00:03:14.680 Test: mem map translation ...[2024-10-08 18:19:08.735608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:14.680 [2024-10-08 18:19:08.735640] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:14.680 [2024-10-08 18:19:08.735686] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:14.680 [2024-10-08 18:19:08.735694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:14.942 passed 00:03:14.942 Test: mem map registration ...[2024-10-08 18:19:08.790971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:14.942 [2024-10-08 18:19:08.791002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:14.942 passed 00:03:14.942 Test: mem map adjacent registrations ...passed 00:03:14.942 00:03:14.942 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.942 suites 1 1 n/a 0 0 00:03:14.942 tests 4 4 4 0 0 00:03:14.942 asserts 152 152 152 0 n/a 00:03:14.942 00:03:14.942 Elapsed time = 0.194 seconds 00:03:14.942 00:03:14.942 real 0m0.209s 00:03:14.942 user 0m0.194s 00:03:14.942 sys 0m0.014s 00:03:14.942 18:19:08 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:14.942 18:19:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:14.942 ************************************ 00:03:14.942 END TEST env_memory 00:03:14.942 ************************************ 00:03:14.942 18:19:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:14.942 18:19:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.942 18:19:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.942 18:19:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:14.942 ************************************ 00:03:14.942 START TEST env_vtophys 00:03:14.942 ************************************ 00:03:14.942 18:19:08 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:14.942 EAL: lib.eal log level changed from notice to debug 00:03:14.942 EAL: Detected lcore 0 as core 0 on socket 0 00:03:14.942 EAL: Detected lcore 1 as core 1 on socket 0 00:03:14.942 EAL: Detected lcore 2 as core 2 on socket 0 00:03:14.942 EAL: Detected lcore 3 as core 3 on socket 0 00:03:14.942 EAL: Detected lcore 4 as core 4 on socket 0 00:03:14.942 EAL: Detected lcore 5 as core 5 on socket 0 00:03:14.942 EAL: Detected lcore 6 as core 6 on socket 0 00:03:14.942 EAL: Detected lcore 7 as core 7 on socket 0 00:03:14.942 EAL: Detected lcore 8 as core 8 on socket 0 00:03:14.942 EAL: Detected lcore 9 as core 9 on socket 0 00:03:14.942 EAL: Detected lcore 10 as 
core 10 on socket 0 00:03:14.942 EAL: Detected lcore 11 as core 11 on socket 0 00:03:14.942 EAL: Detected lcore 12 as core 12 on socket 0 00:03:14.942 EAL: Detected lcore 13 as core 13 on socket 0 00:03:14.942 EAL: Detected lcore 14 as core 14 on socket 0 00:03:14.942 EAL: Detected lcore 15 as core 15 on socket 0 00:03:14.942 EAL: Detected lcore 16 as core 16 on socket 0 00:03:14.942 EAL: Detected lcore 17 as core 17 on socket 0 00:03:14.942 EAL: Detected lcore 18 as core 18 on socket 0 00:03:14.942 EAL: Detected lcore 19 as core 19 on socket 0 00:03:14.942 EAL: Detected lcore 20 as core 20 on socket 0 00:03:14.942 EAL: Detected lcore 21 as core 21 on socket 0 00:03:14.942 EAL: Detected lcore 22 as core 22 on socket 0 00:03:14.942 EAL: Detected lcore 23 as core 23 on socket 0 00:03:14.942 EAL: Detected lcore 24 as core 24 on socket 0 00:03:14.942 EAL: Detected lcore 25 as core 25 on socket 0 00:03:14.942 EAL: Detected lcore 26 as core 26 on socket 0 00:03:14.942 EAL: Detected lcore 27 as core 27 on socket 0 00:03:14.942 EAL: Detected lcore 28 as core 28 on socket 0 00:03:14.942 EAL: Detected lcore 29 as core 29 on socket 0 00:03:14.942 EAL: Detected lcore 30 as core 30 on socket 0 00:03:14.942 EAL: Detected lcore 31 as core 31 on socket 0 00:03:14.942 EAL: Detected lcore 32 as core 32 on socket 0 00:03:14.942 EAL: Detected lcore 33 as core 33 on socket 0 00:03:14.942 EAL: Detected lcore 34 as core 34 on socket 0 00:03:14.943 EAL: Detected lcore 35 as core 35 on socket 0 00:03:14.943 EAL: Detected lcore 36 as core 0 on socket 1 00:03:14.943 EAL: Detected lcore 37 as core 1 on socket 1 00:03:14.943 EAL: Detected lcore 38 as core 2 on socket 1 00:03:14.943 EAL: Detected lcore 39 as core 3 on socket 1 00:03:14.943 EAL: Detected lcore 40 as core 4 on socket 1 00:03:14.943 EAL: Detected lcore 41 as core 5 on socket 1 00:03:14.943 EAL: Detected lcore 42 as core 6 on socket 1 00:03:14.943 EAL: Detected lcore 43 as core 7 on socket 1 00:03:14.943 EAL: Detected lcore 44 as core 8 on socket 1 00:03:14.943 EAL: Detected lcore 45 as core 9 on socket 1 00:03:14.943 EAL: Detected lcore 46 as core 10 on socket 1 00:03:14.943 EAL: Detected lcore 47 as core 11 on socket 1 00:03:14.943 EAL: Detected lcore 48 as core 12 on socket 1 00:03:14.943 EAL: Detected lcore 49 as core 13 on socket 1 00:03:14.943 EAL: Detected lcore 50 as core 14 on socket 1 00:03:14.943 EAL: Detected lcore 51 as core 15 on socket 1 00:03:14.943 EAL: Detected lcore 52 as core 16 on socket 1 00:03:14.943 EAL: Detected lcore 53 as core 17 on socket 1 00:03:14.943 EAL: Detected lcore 54 as core 18 on socket 1 00:03:14.943 EAL: Detected lcore 55 as core 19 on socket 1 00:03:14.943 EAL: Detected lcore 56 as core 20 on socket 1 00:03:14.943 EAL: Detected lcore 57 as core 21 on socket 1 00:03:14.943 EAL: Detected lcore 58 as core 22 on socket 1 00:03:14.943 EAL: Detected lcore 59 as core 23 on socket 1 00:03:14.943 EAL: Detected lcore 60 as core 24 on socket 1 00:03:14.943 EAL: Detected lcore 61 as core 25 on socket 1 00:03:14.943 EAL: Detected lcore 62 as core 26 on socket 1 00:03:14.943 EAL: Detected lcore 63 as core 27 on socket 1 00:03:14.943 EAL: Detected lcore 64 as core 28 on socket 1 00:03:14.943 EAL: Detected lcore 65 as core 29 on socket 1 00:03:14.943 EAL: Detected lcore 66 as core 30 on socket 1 00:03:14.943 EAL: Detected lcore 67 as core 31 on socket 1 00:03:14.943 EAL: Detected lcore 68 as core 32 on socket 1 00:03:14.943 EAL: Detected lcore 69 as core 33 on socket 1 00:03:14.943 EAL: Detected lcore 70 as core 34 on socket 1 
00:03:14.943 EAL: Detected lcore 71 as core 35 on socket 1 00:03:14.943 EAL: Detected lcore 72 as core 0 on socket 0 00:03:14.943 EAL: Detected lcore 73 as core 1 on socket 0 00:03:14.943 EAL: Detected lcore 74 as core 2 on socket 0 00:03:14.943 EAL: Detected lcore 75 as core 3 on socket 0 00:03:14.943 EAL: Detected lcore 76 as core 4 on socket 0 00:03:14.943 EAL: Detected lcore 77 as core 5 on socket 0 00:03:14.943 EAL: Detected lcore 78 as core 6 on socket 0 00:03:14.943 EAL: Detected lcore 79 as core 7 on socket 0 00:03:14.943 EAL: Detected lcore 80 as core 8 on socket 0 00:03:14.943 EAL: Detected lcore 81 as core 9 on socket 0 00:03:14.943 EAL: Detected lcore 82 as core 10 on socket 0 00:03:14.943 EAL: Detected lcore 83 as core 11 on socket 0 00:03:14.943 EAL: Detected lcore 84 as core 12 on socket 0 00:03:14.943 EAL: Detected lcore 85 as core 13 on socket 0 00:03:14.943 EAL: Detected lcore 86 as core 14 on socket 0 00:03:14.943 EAL: Detected lcore 87 as core 15 on socket 0 00:03:14.943 EAL: Detected lcore 88 as core 16 on socket 0 00:03:14.943 EAL: Detected lcore 89 as core 17 on socket 0 00:03:14.943 EAL: Detected lcore 90 as core 18 on socket 0 00:03:14.943 EAL: Detected lcore 91 as core 19 on socket 0 00:03:14.943 EAL: Detected lcore 92 as core 20 on socket 0 00:03:14.943 EAL: Detected lcore 93 as core 21 on socket 0 00:03:14.943 EAL: Detected lcore 94 as core 22 on socket 0 00:03:14.943 EAL: Detected lcore 95 as core 23 on socket 0 00:03:14.943 EAL: Detected lcore 96 as core 24 on socket 0 00:03:14.943 EAL: Detected lcore 97 as core 25 on socket 0 00:03:14.943 EAL: Detected lcore 98 as core 26 on socket 0 00:03:14.943 EAL: Detected lcore 99 as core 27 on socket 0 00:03:14.943 EAL: Detected lcore 100 as core 28 on socket 0 00:03:14.943 EAL: Detected lcore 101 as core 29 on socket 0 00:03:14.943 EAL: Detected lcore 102 as core 30 on socket 0 00:03:14.943 EAL: Detected lcore 103 as core 31 on socket 0 00:03:14.943 EAL: Detected lcore 104 as core 32 on socket 0 00:03:14.943 EAL: Detected lcore 105 as core 33 on socket 0 00:03:14.943 EAL: Detected lcore 106 as core 34 on socket 0 00:03:14.943 EAL: Detected lcore 107 as core 35 on socket 0 00:03:14.943 EAL: Detected lcore 108 as core 0 on socket 1 00:03:14.943 EAL: Detected lcore 109 as core 1 on socket 1 00:03:14.943 EAL: Detected lcore 110 as core 2 on socket 1 00:03:14.943 EAL: Detected lcore 111 as core 3 on socket 1 00:03:14.943 EAL: Detected lcore 112 as core 4 on socket 1 00:03:14.943 EAL: Detected lcore 113 as core 5 on socket 1 00:03:14.943 EAL: Detected lcore 114 as core 6 on socket 1 00:03:14.943 EAL: Detected lcore 115 as core 7 on socket 1 00:03:14.943 EAL: Detected lcore 116 as core 8 on socket 1 00:03:14.943 EAL: Detected lcore 117 as core 9 on socket 1 00:03:14.943 EAL: Detected lcore 118 as core 10 on socket 1 00:03:14.943 EAL: Detected lcore 119 as core 11 on socket 1 00:03:14.943 EAL: Detected lcore 120 as core 12 on socket 1 00:03:14.943 EAL: Detected lcore 121 as core 13 on socket 1 00:03:14.943 EAL: Detected lcore 122 as core 14 on socket 1 00:03:14.943 EAL: Detected lcore 123 as core 15 on socket 1 00:03:14.943 EAL: Detected lcore 124 as core 16 on socket 1 00:03:14.943 EAL: Detected lcore 125 as core 17 on socket 1 00:03:14.943 EAL: Detected lcore 126 as core 18 on socket 1 00:03:14.943 EAL: Detected lcore 127 as core 19 on socket 1 00:03:14.943 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:14.943 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:14.943 EAL: Skipped lcore 130 as core 22 on socket 1 
00:03:14.943 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:14.943 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:14.943 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:14.943 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:14.943 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:14.943 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:14.943 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:14.943 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:14.943 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:14.943 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:14.943 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:14.943 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:14.943 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:14.943 EAL: Maximum logical cores by configuration: 128 00:03:14.943 EAL: Detected CPU lcores: 128 00:03:14.943 EAL: Detected NUMA nodes: 2 00:03:14.943 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:14.943 EAL: Detected shared linkage of DPDK 00:03:14.943 EAL: No shared files mode enabled, IPC will be disabled 00:03:14.943 EAL: Bus pci wants IOVA as 'DC' 00:03:14.943 EAL: Buses did not request a specific IOVA mode. 00:03:14.943 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:14.943 EAL: Selected IOVA mode 'VA' 00:03:14.943 EAL: Probing VFIO support... 00:03:14.943 EAL: IOMMU type 1 (Type 1) is supported 00:03:14.943 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:14.943 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:14.943 EAL: VFIO support initialized 00:03:14.943 EAL: Ask a virtual area of 0x2e000 bytes 00:03:14.943 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:15.204 EAL: Setting up physically contiguous memory... 00:03:15.204 EAL: Setting maximum number of open files to 524288 00:03:15.204 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:15.204 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:15.204 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:15.204 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.204 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:15.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.204 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.204 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:15.204 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:15.204 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.204 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:15.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.204 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.204 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:15.204 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:15.204 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.204 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:15.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.204 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.204 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:15.204 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:15.204 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.204 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:15.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.204 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.204 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:15.204 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:15.204 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:15.204 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.204 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:15.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.204 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.204 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:15.204 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:15.204 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.204 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:15.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.204 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.205 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:15.205 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:15.205 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.205 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:15.205 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.205 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.205 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:15.205 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:15.205 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.205 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:15.205 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.205 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.205 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:15.205 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:15.205 EAL: Hugepages will be freed exactly as allocated. 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: TSC frequency is ~2400000 KHz 00:03:15.205 EAL: Main lcore 0 is ready (tid=7f94c3ecda00;cpuset=[0]) 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 0 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 2MB 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:15.205 EAL: Mem event callback 'spdk:(nil)' registered 00:03:15.205 00:03:15.205 00:03:15.205 CUnit - A unit testing framework for C - Version 2.1-3 00:03:15.205 http://cunit.sourceforge.net/ 00:03:15.205 00:03:15.205 00:03:15.205 Suite: components_suite 00:03:15.205 Test: vtophys_malloc_test ...passed 00:03:15.205 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 4MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 4MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 6MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 6MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 10MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 10MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 18MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 18MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 34MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 34MB 00:03:15.205 EAL: Trying to obtain current memory policy. 
00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 66MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 66MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 130MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 130MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.205 EAL: Restoring previous memory policy: 4 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was expanded by 258MB 00:03:15.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.205 EAL: request: mp_malloc_sync 00:03:15.205 EAL: No shared files mode enabled, IPC is disabled 00:03:15.205 EAL: Heap on socket 0 was shrunk by 258MB 00:03:15.205 EAL: Trying to obtain current memory policy. 00:03:15.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.466 EAL: Restoring previous memory policy: 4 00:03:15.466 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.466 EAL: request: mp_malloc_sync 00:03:15.466 EAL: No shared files mode enabled, IPC is disabled 00:03:15.466 EAL: Heap on socket 0 was expanded by 514MB 00:03:15.466 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.466 EAL: request: mp_malloc_sync 00:03:15.466 EAL: No shared files mode enabled, IPC is disabled 00:03:15.466 EAL: Heap on socket 0 was shrunk by 514MB 00:03:15.466 EAL: Trying to obtain current memory policy. 
00:03:15.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.726 EAL: Restoring previous memory policy: 4 00:03:15.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.726 EAL: request: mp_malloc_sync 00:03:15.726 EAL: No shared files mode enabled, IPC is disabled 00:03:15.727 EAL: Heap on socket 0 was expanded by 1026MB 00:03:15.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.727 EAL: request: mp_malloc_sync 00:03:15.727 EAL: No shared files mode enabled, IPC is disabled 00:03:15.727 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:15.727 passed 00:03:15.727 00:03:15.727 Run Summary: Type Total Ran Passed Failed Inactive 00:03:15.727 suites 1 1 n/a 0 0 00:03:15.727 tests 2 2 2 0 0 00:03:15.727 asserts 497 497 497 0 n/a 00:03:15.727 00:03:15.727 Elapsed time = 0.685 seconds 00:03:15.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.727 EAL: request: mp_malloc_sync 00:03:15.727 EAL: No shared files mode enabled, IPC is disabled 00:03:15.727 EAL: Heap on socket 0 was shrunk by 2MB 00:03:15.727 EAL: No shared files mode enabled, IPC is disabled 00:03:15.727 EAL: No shared files mode enabled, IPC is disabled 00:03:15.727 EAL: No shared files mode enabled, IPC is disabled 00:03:15.727 00:03:15.727 real 0m0.827s 00:03:15.727 user 0m0.430s 00:03:15.727 sys 0m0.368s 00:03:15.727 18:19:09 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.727 18:19:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:15.727 ************************************ 00:03:15.727 END TEST env_vtophys 00:03:15.727 ************************************ 00:03:15.988 18:19:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:15.988 18:19:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.988 18:19:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.988 18:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:15.988 ************************************ 00:03:15.988 START TEST env_pci 00:03:15.988 ************************************ 00:03:15.988 18:19:09 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:15.988 00:03:15.988 00:03:15.988 CUnit - A unit testing framework for C - Version 2.1-3 00:03:15.988 http://cunit.sourceforge.net/ 00:03:15.988 00:03:15.988 00:03:15.988 Suite: pci 00:03:15.988 Test: pci_hook ...[2024-10-08 18:19:09.865633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 979616 has claimed it 00:03:15.988 EAL: Cannot find device (10000:00:01.0) 00:03:15.988 EAL: Failed to attach device on primary process 00:03:15.988 passed 00:03:15.988 00:03:15.988 Run Summary: Type Total Ran Passed Failed Inactive 00:03:15.988 suites 1 1 n/a 0 0 00:03:15.988 tests 1 1 1 0 0 00:03:15.988 asserts 25 25 25 0 n/a 00:03:15.988 00:03:15.988 Elapsed time = 0.031 seconds 00:03:15.988 00:03:15.988 real 0m0.053s 00:03:15.988 user 0m0.017s 00:03:15.988 sys 0m0.035s 00:03:15.988 18:19:09 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.988 18:19:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:15.988 ************************************ 00:03:15.988 END TEST env_pci 00:03:15.988 ************************************ 00:03:15.988 18:19:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:15.988 
18:19:09 env -- env/env.sh@15 -- # uname 00:03:15.988 18:19:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:15.988 18:19:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:15.988 18:19:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:15.988 18:19:09 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:15.988 18:19:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.988 18:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:15.988 ************************************ 00:03:15.988 START TEST env_dpdk_post_init 00:03:15.988 ************************************ 00:03:15.988 18:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:15.988 EAL: Detected CPU lcores: 128 00:03:15.988 EAL: Detected NUMA nodes: 2 00:03:15.988 EAL: Detected shared linkage of DPDK 00:03:15.988 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:16.249 EAL: Selected IOVA mode 'VA' 00:03:16.250 EAL: VFIO support initialized 00:03:16.250 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:16.250 EAL: Using IOMMU type 1 (Type 1) 00:03:16.250 EAL: Ignore mapping IO port bar(1) 00:03:16.510 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:16.510 EAL: Ignore mapping IO port bar(1) 00:03:16.770 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:16.770 EAL: Ignore mapping IO port bar(1) 00:03:16.770 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:17.030 EAL: Ignore mapping IO port bar(1) 00:03:17.030 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:17.291 EAL: Ignore mapping IO port bar(1) 00:03:17.291 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:17.553 EAL: Ignore mapping IO port bar(1) 00:03:17.553 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:17.553 EAL: Ignore mapping IO port bar(1) 00:03:17.814 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:17.814 EAL: Ignore mapping IO port bar(1) 00:03:18.076 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:18.337 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:18.337 EAL: Ignore mapping IO port bar(1) 00:03:18.337 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:18.599 EAL: Ignore mapping IO port bar(1) 00:03:18.599 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:18.860 EAL: Ignore mapping IO port bar(1) 00:03:18.860 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:19.121 EAL: Ignore mapping IO port bar(1) 00:03:19.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:19.121 EAL: Ignore mapping IO port bar(1) 00:03:19.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:19.381 EAL: Ignore mapping IO port bar(1) 00:03:19.642 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:19.642 EAL: Ignore mapping IO port bar(1) 00:03:19.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 
(socket 1) 00:03:19.902 EAL: Ignore mapping IO port bar(1) 00:03:19.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:20.163 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:20.163 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:20.163 Starting DPDK initialization... 00:03:20.163 Starting SPDK post initialization... 00:03:20.163 SPDK NVMe probe 00:03:20.163 Attaching to 0000:65:00.0 00:03:20.163 Attached to 0000:65:00.0 00:03:20.163 Cleaning up... 00:03:22.077 00:03:22.077 real 0m5.750s 00:03:22.077 user 0m0.103s 00:03:22.077 sys 0m0.202s 00:03:22.077 18:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.077 18:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:22.077 ************************************ 00:03:22.077 END TEST env_dpdk_post_init 00:03:22.077 ************************************ 00:03:22.077 18:19:15 env -- env/env.sh@26 -- # uname 00:03:22.077 18:19:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:22.077 18:19:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.077 18:19:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:22.077 18:19:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.077 18:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.077 ************************************ 00:03:22.077 START TEST env_mem_callbacks 00:03:22.077 ************************************ 00:03:22.077 18:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.077 EAL: Detected CPU lcores: 128 00:03:22.077 EAL: Detected NUMA nodes: 2 00:03:22.077 EAL: Detected shared linkage of DPDK 00:03:22.077 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.077 EAL: Selected IOVA mode 'VA' 00:03:22.077 EAL: VFIO support initialized 00:03:22.077 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.077 00:03:22.077 00:03:22.077 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.077 http://cunit.sourceforge.net/ 00:03:22.077 00:03:22.077 00:03:22.077 Suite: memory 00:03:22.077 Test: test ... 
00:03:22.077 register 0x200000200000 2097152 00:03:22.077 malloc 3145728 00:03:22.077 register 0x200000400000 4194304 00:03:22.077 buf 0x200000500000 len 3145728 PASSED 00:03:22.077 malloc 64 00:03:22.077 buf 0x2000004fff40 len 64 PASSED 00:03:22.077 malloc 4194304 00:03:22.077 register 0x200000800000 6291456 00:03:22.077 buf 0x200000a00000 len 4194304 PASSED 00:03:22.077 free 0x200000500000 3145728 00:03:22.077 free 0x2000004fff40 64 00:03:22.077 unregister 0x200000400000 4194304 PASSED 00:03:22.077 free 0x200000a00000 4194304 00:03:22.077 unregister 0x200000800000 6291456 PASSED 00:03:22.077 malloc 8388608 00:03:22.077 register 0x200000400000 10485760 00:03:22.077 buf 0x200000600000 len 8388608 PASSED 00:03:22.077 free 0x200000600000 8388608 00:03:22.077 unregister 0x200000400000 10485760 PASSED 00:03:22.077 passed 00:03:22.077 00:03:22.077 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.077 suites 1 1 n/a 0 0 00:03:22.077 tests 1 1 1 0 0 00:03:22.077 asserts 15 15 15 0 n/a 00:03:22.077 00:03:22.077 Elapsed time = 0.010 seconds 00:03:22.077 00:03:22.077 real 0m0.070s 00:03:22.077 user 0m0.024s 00:03:22.077 sys 0m0.046s 00:03:22.077 18:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.077 18:19:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:22.077 ************************************ 00:03:22.077 END TEST env_mem_callbacks 00:03:22.077 ************************************ 00:03:22.077 00:03:22.077 real 0m7.526s 00:03:22.077 user 0m1.028s 00:03:22.077 sys 0m1.055s 00:03:22.077 18:19:15 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.077 18:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.077 ************************************ 00:03:22.077 END TEST env 00:03:22.077 ************************************ 00:03:22.077 18:19:15 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.077 18:19:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:22.077 18:19:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.077 18:19:15 -- common/autotest_common.sh@10 -- # set +x 00:03:22.077 ************************************ 00:03:22.077 START TEST rpc 00:03:22.077 ************************************ 00:03:22.077 18:19:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.077 * Looking for test storage... 
00:03:22.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:22.077 18:19:16 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:22.077 18:19:16 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:22.077 18:19:16 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.341 18:19:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.341 18:19:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.341 18:19:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.341 18:19:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.341 18:19:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.341 18:19:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:22.341 18:19:16 rpc -- scripts/common.sh@345 -- # : 1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.341 18:19:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:22.341 18:19:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@353 -- # local d=1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.341 18:19:16 rpc -- scripts/common.sh@355 -- # echo 1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.341 18:19:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@353 -- # local d=2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.341 18:19:16 rpc -- scripts/common.sh@355 -- # echo 2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.341 18:19:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.341 18:19:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.341 18:19:16 rpc -- scripts/common.sh@368 -- # return 0 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.341 --rc genhtml_branch_coverage=1 00:03:22.341 --rc genhtml_function_coverage=1 00:03:22.341 --rc genhtml_legend=1 00:03:22.341 --rc geninfo_all_blocks=1 00:03:22.341 --rc geninfo_unexecuted_blocks=1 00:03:22.341 00:03:22.341 ' 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.341 --rc genhtml_branch_coverage=1 00:03:22.341 --rc genhtml_function_coverage=1 00:03:22.341 --rc genhtml_legend=1 00:03:22.341 --rc geninfo_all_blocks=1 00:03:22.341 --rc geninfo_unexecuted_blocks=1 00:03:22.341 00:03:22.341 ' 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.341 --rc genhtml_branch_coverage=1 00:03:22.341 --rc genhtml_function_coverage=1 
00:03:22.341 --rc genhtml_legend=1 00:03:22.341 --rc geninfo_all_blocks=1 00:03:22.341 --rc geninfo_unexecuted_blocks=1 00:03:22.341 00:03:22.341 ' 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.341 --rc genhtml_branch_coverage=1 00:03:22.341 --rc genhtml_function_coverage=1 00:03:22.341 --rc genhtml_legend=1 00:03:22.341 --rc geninfo_all_blocks=1 00:03:22.341 --rc geninfo_unexecuted_blocks=1 00:03:22.341 00:03:22.341 ' 00:03:22.341 18:19:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=981014 00:03:22.341 18:19:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:22.341 18:19:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 981014 00:03:22.341 18:19:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 981014 ']' 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:22.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:22.341 18:19:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:22.341 [2024-10-08 18:19:16.286057] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:03:22.341 [2024-10-08 18:19:16.286125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981014 ] 00:03:22.341 [2024-10-08 18:19:16.369316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:22.601 [2024-10-08 18:19:16.463142] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:22.601 [2024-10-08 18:19:16.463205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 981014' to capture a snapshot of events at runtime. 00:03:22.601 [2024-10-08 18:19:16.463214] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:22.601 [2024-10-08 18:19:16.463221] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:22.601 [2024-10-08 18:19:16.463228] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid981014 for offline analysis/debug. 
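The app_setup_trace notices above spell out the trace-capture options for this exact run: spdk_tgt was started with '-e bdev', which is why trace_get_info later reports tpoint_group_mask 0x8 with every bdev tracepoint enabled. Restated as copy-pasteable commands (pid 981014 is specific to this run; build/bin is the usual SPDK output directory):

    build/bin/spdk_trace -s spdk_tgt -p 981014     # live snapshot, exactly as quoted in the notice
    cp /dev/shm/spdk_tgt_trace.pid981014 /tmp/     # or keep the shm file for offline analysis, per the last notice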
00:03:22.601 [2024-10-08 18:19:16.464065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.173 18:19:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:23.173 18:19:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:23.173 18:19:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.173 18:19:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.173 18:19:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:23.173 18:19:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:23.173 18:19:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.173 18:19:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.173 18:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.173 ************************************ 00:03:23.173 START TEST rpc_integrity 00:03:23.173 ************************************ 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.173 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:23.173 { 00:03:23.173 "name": "Malloc0", 00:03:23.173 "aliases": [ 00:03:23.173 "652978e0-be4a-49d3-a910-ebe0e167ed72" 00:03:23.173 ], 00:03:23.173 "product_name": "Malloc disk", 00:03:23.173 "block_size": 512, 00:03:23.173 "num_blocks": 16384, 00:03:23.173 "uuid": "652978e0-be4a-49d3-a910-ebe0e167ed72", 00:03:23.173 "assigned_rate_limits": { 00:03:23.173 "rw_ios_per_sec": 0, 00:03:23.173 "rw_mbytes_per_sec": 0, 00:03:23.173 "r_mbytes_per_sec": 0, 00:03:23.173 "w_mbytes_per_sec": 0 00:03:23.173 }, 
00:03:23.173 "claimed": false, 00:03:23.173 "zoned": false, 00:03:23.173 "supported_io_types": { 00:03:23.173 "read": true, 00:03:23.173 "write": true, 00:03:23.173 "unmap": true, 00:03:23.173 "flush": true, 00:03:23.173 "reset": true, 00:03:23.173 "nvme_admin": false, 00:03:23.173 "nvme_io": false, 00:03:23.173 "nvme_io_md": false, 00:03:23.173 "write_zeroes": true, 00:03:23.173 "zcopy": true, 00:03:23.173 "get_zone_info": false, 00:03:23.173 "zone_management": false, 00:03:23.173 "zone_append": false, 00:03:23.173 "compare": false, 00:03:23.173 "compare_and_write": false, 00:03:23.173 "abort": true, 00:03:23.173 "seek_hole": false, 00:03:23.173 "seek_data": false, 00:03:23.173 "copy": true, 00:03:23.173 "nvme_iov_md": false 00:03:23.173 }, 00:03:23.173 "memory_domains": [ 00:03:23.173 { 00:03:23.173 "dma_device_id": "system", 00:03:23.173 "dma_device_type": 1 00:03:23.173 }, 00:03:23.173 { 00:03:23.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.173 "dma_device_type": 2 00:03:23.173 } 00:03:23.173 ], 00:03:23.173 "driver_specific": {} 00:03:23.173 } 00:03:23.173 ]' 00:03:23.173 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:23.434 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:23.434 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:23.434 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.434 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.435 [2024-10-08 18:19:17.260083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:23.435 [2024-10-08 18:19:17.260128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:23.435 [2024-10-08 18:19:17.260144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe01c40 00:03:23.435 [2024-10-08 18:19:17.260152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:23.435 [2024-10-08 18:19:17.261706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:23.435 [2024-10-08 18:19:17.261761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:23.435 Passthru0 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:23.435 { 00:03:23.435 "name": "Malloc0", 00:03:23.435 "aliases": [ 00:03:23.435 "652978e0-be4a-49d3-a910-ebe0e167ed72" 00:03:23.435 ], 00:03:23.435 "product_name": "Malloc disk", 00:03:23.435 "block_size": 512, 00:03:23.435 "num_blocks": 16384, 00:03:23.435 "uuid": "652978e0-be4a-49d3-a910-ebe0e167ed72", 00:03:23.435 "assigned_rate_limits": { 00:03:23.435 "rw_ios_per_sec": 0, 00:03:23.435 "rw_mbytes_per_sec": 0, 00:03:23.435 "r_mbytes_per_sec": 0, 00:03:23.435 "w_mbytes_per_sec": 0 00:03:23.435 }, 00:03:23.435 "claimed": true, 00:03:23.435 "claim_type": "exclusive_write", 00:03:23.435 "zoned": false, 00:03:23.435 "supported_io_types": { 00:03:23.435 "read": true, 00:03:23.435 "write": true, 00:03:23.435 "unmap": true, 00:03:23.435 "flush": 
true, 00:03:23.435 "reset": true, 00:03:23.435 "nvme_admin": false, 00:03:23.435 "nvme_io": false, 00:03:23.435 "nvme_io_md": false, 00:03:23.435 "write_zeroes": true, 00:03:23.435 "zcopy": true, 00:03:23.435 "get_zone_info": false, 00:03:23.435 "zone_management": false, 00:03:23.435 "zone_append": false, 00:03:23.435 "compare": false, 00:03:23.435 "compare_and_write": false, 00:03:23.435 "abort": true, 00:03:23.435 "seek_hole": false, 00:03:23.435 "seek_data": false, 00:03:23.435 "copy": true, 00:03:23.435 "nvme_iov_md": false 00:03:23.435 }, 00:03:23.435 "memory_domains": [ 00:03:23.435 { 00:03:23.435 "dma_device_id": "system", 00:03:23.435 "dma_device_type": 1 00:03:23.435 }, 00:03:23.435 { 00:03:23.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.435 "dma_device_type": 2 00:03:23.435 } 00:03:23.435 ], 00:03:23.435 "driver_specific": {} 00:03:23.435 }, 00:03:23.435 { 00:03:23.435 "name": "Passthru0", 00:03:23.435 "aliases": [ 00:03:23.435 "da4ee0e5-10d0-5890-b216-53f91f64862f" 00:03:23.435 ], 00:03:23.435 "product_name": "passthru", 00:03:23.435 "block_size": 512, 00:03:23.435 "num_blocks": 16384, 00:03:23.435 "uuid": "da4ee0e5-10d0-5890-b216-53f91f64862f", 00:03:23.435 "assigned_rate_limits": { 00:03:23.435 "rw_ios_per_sec": 0, 00:03:23.435 "rw_mbytes_per_sec": 0, 00:03:23.435 "r_mbytes_per_sec": 0, 00:03:23.435 "w_mbytes_per_sec": 0 00:03:23.435 }, 00:03:23.435 "claimed": false, 00:03:23.435 "zoned": false, 00:03:23.435 "supported_io_types": { 00:03:23.435 "read": true, 00:03:23.435 "write": true, 00:03:23.435 "unmap": true, 00:03:23.435 "flush": true, 00:03:23.435 "reset": true, 00:03:23.435 "nvme_admin": false, 00:03:23.435 "nvme_io": false, 00:03:23.435 "nvme_io_md": false, 00:03:23.435 "write_zeroes": true, 00:03:23.435 "zcopy": true, 00:03:23.435 "get_zone_info": false, 00:03:23.435 "zone_management": false, 00:03:23.435 "zone_append": false, 00:03:23.435 "compare": false, 00:03:23.435 "compare_and_write": false, 00:03:23.435 "abort": true, 00:03:23.435 "seek_hole": false, 00:03:23.435 "seek_data": false, 00:03:23.435 "copy": true, 00:03:23.435 "nvme_iov_md": false 00:03:23.435 }, 00:03:23.435 "memory_domains": [ 00:03:23.435 { 00:03:23.435 "dma_device_id": "system", 00:03:23.435 "dma_device_type": 1 00:03:23.435 }, 00:03:23.435 { 00:03:23.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.435 "dma_device_type": 2 00:03:23.435 } 00:03:23.435 ], 00:03:23.435 "driver_specific": { 00:03:23.435 "passthru": { 00:03:23.435 "name": "Passthru0", 00:03:23.435 "base_bdev_name": "Malloc0" 00:03:23.435 } 00:03:23.435 } 00:03:23.435 } 00:03:23.435 ]' 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.435 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:23.435 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:23.436 18:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:23.436 00:03:23.436 real 0m0.307s 00:03:23.436 user 0m0.187s 00:03:23.436 sys 0m0.047s 00:03:23.436 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.436 18:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.436 ************************************ 00:03:23.436 END TEST rpc_integrity 00:03:23.436 ************************************ 00:03:23.436 18:19:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:23.436 18:19:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.436 18:19:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.436 18:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 ************************************ 00:03:23.697 START TEST rpc_plugins 00:03:23.697 ************************************ 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:23.697 { 00:03:23.697 "name": "Malloc1", 00:03:23.697 "aliases": [ 00:03:23.697 "b95e8697-f050-45d0-86fb-fb7fbfb8897d" 00:03:23.697 ], 00:03:23.697 "product_name": "Malloc disk", 00:03:23.697 "block_size": 4096, 00:03:23.697 "num_blocks": 256, 00:03:23.697 "uuid": "b95e8697-f050-45d0-86fb-fb7fbfb8897d", 00:03:23.697 "assigned_rate_limits": { 00:03:23.697 "rw_ios_per_sec": 0, 00:03:23.697 "rw_mbytes_per_sec": 0, 00:03:23.697 "r_mbytes_per_sec": 0, 00:03:23.697 "w_mbytes_per_sec": 0 00:03:23.697 }, 00:03:23.697 "claimed": false, 00:03:23.697 "zoned": false, 00:03:23.697 "supported_io_types": { 00:03:23.697 "read": true, 00:03:23.697 "write": true, 00:03:23.697 "unmap": true, 00:03:23.697 "flush": true, 00:03:23.697 "reset": true, 00:03:23.697 "nvme_admin": false, 00:03:23.697 "nvme_io": false, 00:03:23.697 "nvme_io_md": false, 00:03:23.697 "write_zeroes": true, 00:03:23.697 "zcopy": true, 00:03:23.697 "get_zone_info": false, 00:03:23.697 "zone_management": false, 00:03:23.697 "zone_append": false, 00:03:23.697 "compare": false, 00:03:23.697 "compare_and_write": false, 00:03:23.697 "abort": true, 00:03:23.697 "seek_hole": false, 00:03:23.697 "seek_data": false, 00:03:23.697 "copy": true, 00:03:23.697 "nvme_iov_md": false 
00:03:23.697 }, 00:03:23.697 "memory_domains": [ 00:03:23.697 { 00:03:23.697 "dma_device_id": "system", 00:03:23.697 "dma_device_type": 1 00:03:23.697 }, 00:03:23.697 { 00:03:23.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.697 "dma_device_type": 2 00:03:23.697 } 00:03:23.697 ], 00:03:23.697 "driver_specific": {} 00:03:23.697 } 00:03:23.697 ]' 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:23.697 18:19:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:23.697 00:03:23.697 real 0m0.142s 00:03:23.697 user 0m0.085s 00:03:23.697 sys 0m0.021s 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.697 18:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 ************************************ 00:03:23.697 END TEST rpc_plugins 00:03:23.697 ************************************ 00:03:23.697 18:19:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:23.697 18:19:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.697 18:19:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.697 18:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 ************************************ 00:03:23.697 START TEST rpc_trace_cmd_test 00:03:23.697 ************************************ 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.697 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:23.697 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid981014", 00:03:23.697 "tpoint_group_mask": "0x8", 00:03:23.697 "iscsi_conn": { 00:03:23.697 "mask": "0x2", 00:03:23.697 "tpoint_mask": "0x0" 00:03:23.697 }, 00:03:23.698 "scsi": { 00:03:23.698 "mask": "0x4", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "bdev": { 00:03:23.698 "mask": "0x8", 00:03:23.698 "tpoint_mask": "0xffffffffffffffff" 00:03:23.698 }, 00:03:23.698 "nvmf_rdma": { 00:03:23.698 "mask": "0x10", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "nvmf_tcp": { 00:03:23.698 "mask": "0x20", 00:03:23.698 
"tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "ftl": { 00:03:23.698 "mask": "0x40", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "blobfs": { 00:03:23.698 "mask": "0x80", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "dsa": { 00:03:23.698 "mask": "0x200", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "thread": { 00:03:23.698 "mask": "0x400", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "nvme_pcie": { 00:03:23.698 "mask": "0x800", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "iaa": { 00:03:23.698 "mask": "0x1000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "nvme_tcp": { 00:03:23.698 "mask": "0x2000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "bdev_nvme": { 00:03:23.698 "mask": "0x4000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "sock": { 00:03:23.698 "mask": "0x8000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "blob": { 00:03:23.698 "mask": "0x10000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "bdev_raid": { 00:03:23.698 "mask": "0x20000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 }, 00:03:23.698 "scheduler": { 00:03:23.698 "mask": "0x40000", 00:03:23.698 "tpoint_mask": "0x0" 00:03:23.698 } 00:03:23.698 }' 00:03:23.698 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:23.959 00:03:23.959 real 0m0.256s 00:03:23.959 user 0m0.218s 00:03:23.959 sys 0m0.028s 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.959 18:19:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:23.959 ************************************ 00:03:23.959 END TEST rpc_trace_cmd_test 00:03:23.959 ************************************ 00:03:24.220 18:19:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:24.220 18:19:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:24.220 18:19:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:24.220 18:19:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.220 18:19:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.220 18:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.220 ************************************ 00:03:24.220 START TEST rpc_daemon_integrity 00:03:24.220 ************************************ 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.220 18:19:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:24.220 { 00:03:24.220 "name": "Malloc2", 00:03:24.220 "aliases": [ 00:03:24.220 "7690c20c-edc9-4421-8379-63a05b5cb7bf" 00:03:24.220 ], 00:03:24.220 "product_name": "Malloc disk", 00:03:24.220 "block_size": 512, 00:03:24.220 "num_blocks": 16384, 00:03:24.220 "uuid": "7690c20c-edc9-4421-8379-63a05b5cb7bf", 00:03:24.220 "assigned_rate_limits": { 00:03:24.220 "rw_ios_per_sec": 0, 00:03:24.220 "rw_mbytes_per_sec": 0, 00:03:24.220 "r_mbytes_per_sec": 0, 00:03:24.220 "w_mbytes_per_sec": 0 00:03:24.220 }, 00:03:24.220 "claimed": false, 00:03:24.220 "zoned": false, 00:03:24.220 "supported_io_types": { 00:03:24.220 "read": true, 00:03:24.220 "write": true, 00:03:24.220 "unmap": true, 00:03:24.220 "flush": true, 00:03:24.220 "reset": true, 00:03:24.220 "nvme_admin": false, 00:03:24.220 "nvme_io": false, 00:03:24.220 "nvme_io_md": false, 00:03:24.220 "write_zeroes": true, 00:03:24.220 "zcopy": true, 00:03:24.220 "get_zone_info": false, 00:03:24.220 "zone_management": false, 00:03:24.220 "zone_append": false, 00:03:24.220 "compare": false, 00:03:24.220 "compare_and_write": false, 00:03:24.220 "abort": true, 00:03:24.220 "seek_hole": false, 00:03:24.220 "seek_data": false, 00:03:24.220 "copy": true, 00:03:24.220 "nvme_iov_md": false 00:03:24.220 }, 00:03:24.220 "memory_domains": [ 00:03:24.220 { 00:03:24.220 "dma_device_id": "system", 00:03:24.220 "dma_device_type": 1 00:03:24.220 }, 00:03:24.220 { 00:03:24.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.220 "dma_device_type": 2 00:03:24.220 } 00:03:24.220 ], 00:03:24.220 "driver_specific": {} 00:03:24.220 } 00:03:24.220 ]' 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.220 [2024-10-08 18:19:18.206665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:24.220 
[2024-10-08 18:19:18.206707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:24.220 [2024-10-08 18:19:18.206726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe01ec0 00:03:24.220 [2024-10-08 18:19:18.206733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:24.220 [2024-10-08 18:19:18.208180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:24.220 [2024-10-08 18:19:18.208226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:24.220 Passthru0 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.220 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:24.220 { 00:03:24.220 "name": "Malloc2", 00:03:24.220 "aliases": [ 00:03:24.220 "7690c20c-edc9-4421-8379-63a05b5cb7bf" 00:03:24.220 ], 00:03:24.220 "product_name": "Malloc disk", 00:03:24.220 "block_size": 512, 00:03:24.220 "num_blocks": 16384, 00:03:24.220 "uuid": "7690c20c-edc9-4421-8379-63a05b5cb7bf", 00:03:24.220 "assigned_rate_limits": { 00:03:24.220 "rw_ios_per_sec": 0, 00:03:24.220 "rw_mbytes_per_sec": 0, 00:03:24.220 "r_mbytes_per_sec": 0, 00:03:24.220 "w_mbytes_per_sec": 0 00:03:24.220 }, 00:03:24.220 "claimed": true, 00:03:24.220 "claim_type": "exclusive_write", 00:03:24.220 "zoned": false, 00:03:24.220 "supported_io_types": { 00:03:24.220 "read": true, 00:03:24.220 "write": true, 00:03:24.220 "unmap": true, 00:03:24.220 "flush": true, 00:03:24.220 "reset": true, 00:03:24.220 "nvme_admin": false, 00:03:24.220 "nvme_io": false, 00:03:24.220 "nvme_io_md": false, 00:03:24.220 "write_zeroes": true, 00:03:24.220 "zcopy": true, 00:03:24.220 "get_zone_info": false, 00:03:24.220 "zone_management": false, 00:03:24.220 "zone_append": false, 00:03:24.220 "compare": false, 00:03:24.220 "compare_and_write": false, 00:03:24.220 "abort": true, 00:03:24.220 "seek_hole": false, 00:03:24.220 "seek_data": false, 00:03:24.220 "copy": true, 00:03:24.220 "nvme_iov_md": false 00:03:24.220 }, 00:03:24.220 "memory_domains": [ 00:03:24.220 { 00:03:24.220 "dma_device_id": "system", 00:03:24.220 "dma_device_type": 1 00:03:24.220 }, 00:03:24.220 { 00:03:24.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.220 "dma_device_type": 2 00:03:24.220 } 00:03:24.220 ], 00:03:24.220 "driver_specific": {} 00:03:24.220 }, 00:03:24.220 { 00:03:24.220 "name": "Passthru0", 00:03:24.220 "aliases": [ 00:03:24.220 "a92c9a0b-f44a-5874-a918-465fdf4636ce" 00:03:24.220 ], 00:03:24.220 "product_name": "passthru", 00:03:24.220 "block_size": 512, 00:03:24.220 "num_blocks": 16384, 00:03:24.220 "uuid": "a92c9a0b-f44a-5874-a918-465fdf4636ce", 00:03:24.220 "assigned_rate_limits": { 00:03:24.220 "rw_ios_per_sec": 0, 00:03:24.220 "rw_mbytes_per_sec": 0, 00:03:24.220 "r_mbytes_per_sec": 0, 00:03:24.220 "w_mbytes_per_sec": 0 00:03:24.220 }, 00:03:24.220 "claimed": false, 00:03:24.220 "zoned": false, 00:03:24.220 "supported_io_types": { 00:03:24.220 "read": true, 00:03:24.220 "write": true, 00:03:24.220 "unmap": true, 00:03:24.220 "flush": true, 00:03:24.220 "reset": true, 
00:03:24.220 "nvme_admin": false, 00:03:24.220 "nvme_io": false, 00:03:24.220 "nvme_io_md": false, 00:03:24.221 "write_zeroes": true, 00:03:24.221 "zcopy": true, 00:03:24.221 "get_zone_info": false, 00:03:24.221 "zone_management": false, 00:03:24.221 "zone_append": false, 00:03:24.221 "compare": false, 00:03:24.221 "compare_and_write": false, 00:03:24.221 "abort": true, 00:03:24.221 "seek_hole": false, 00:03:24.221 "seek_data": false, 00:03:24.221 "copy": true, 00:03:24.221 "nvme_iov_md": false 00:03:24.221 }, 00:03:24.221 "memory_domains": [ 00:03:24.221 { 00:03:24.221 "dma_device_id": "system", 00:03:24.221 "dma_device_type": 1 00:03:24.221 }, 00:03:24.221 { 00:03:24.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.221 "dma_device_type": 2 00:03:24.221 } 00:03:24.221 ], 00:03:24.221 "driver_specific": { 00:03:24.221 "passthru": { 00:03:24.221 "name": "Passthru0", 00:03:24.221 "base_bdev_name": "Malloc2" 00:03:24.221 } 00:03:24.221 } 00:03:24.221 } 00:03:24.221 ]' 00:03:24.221 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:24.480 00:03:24.480 real 0m0.308s 00:03:24.480 user 0m0.198s 00:03:24.480 sys 0m0.039s 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:24.480 18:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.480 ************************************ 00:03:24.480 END TEST rpc_daemon_integrity 00:03:24.480 ************************************ 00:03:24.480 18:19:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:24.480 18:19:18 rpc -- rpc/rpc.sh@84 -- # killprocess 981014 00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 981014 ']' 00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@954 -- # kill -0 981014 00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@955 -- # uname 00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 981014 
00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:24.480 18:19:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:24.481 18:19:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 981014' 00:03:24.481 killing process with pid 981014 00:03:24.481 18:19:18 rpc -- common/autotest_common.sh@969 -- # kill 981014 00:03:24.481 18:19:18 rpc -- common/autotest_common.sh@974 -- # wait 981014 00:03:24.740 00:03:24.740 real 0m2.715s 00:03:24.740 user 0m3.427s 00:03:24.740 sys 0m0.842s 00:03:24.740 18:19:18 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:24.740 18:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.740 ************************************ 00:03:24.740 END TEST rpc 00:03:24.740 ************************************ 00:03:24.740 18:19:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:24.740 18:19:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.740 18:19:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.740 18:19:18 -- common/autotest_common.sh@10 -- # set +x 00:03:25.001 ************************************ 00:03:25.001 START TEST skip_rpc 00:03:25.001 ************************************ 00:03:25.001 18:19:18 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:25.001 * Looking for test storage... 00:03:25.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.001 18:19:18 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:25.001 18:19:18 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:25.002 18:19:18 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.002 18:19:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:25.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.002 --rc genhtml_branch_coverage=1 00:03:25.002 --rc genhtml_function_coverage=1 00:03:25.002 --rc genhtml_legend=1 00:03:25.002 --rc geninfo_all_blocks=1 00:03:25.002 --rc geninfo_unexecuted_blocks=1 00:03:25.002 00:03:25.002 ' 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:25.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.002 --rc genhtml_branch_coverage=1 00:03:25.002 --rc genhtml_function_coverage=1 00:03:25.002 --rc genhtml_legend=1 00:03:25.002 --rc geninfo_all_blocks=1 00:03:25.002 --rc geninfo_unexecuted_blocks=1 00:03:25.002 00:03:25.002 ' 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:25.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.002 --rc genhtml_branch_coverage=1 00:03:25.002 --rc genhtml_function_coverage=1 00:03:25.002 --rc genhtml_legend=1 00:03:25.002 --rc geninfo_all_blocks=1 00:03:25.002 --rc geninfo_unexecuted_blocks=1 00:03:25.002 00:03:25.002 ' 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:25.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.002 --rc genhtml_branch_coverage=1 00:03:25.002 --rc genhtml_function_coverage=1 00:03:25.002 --rc genhtml_legend=1 00:03:25.002 --rc geninfo_all_blocks=1 00:03:25.002 --rc geninfo_unexecuted_blocks=1 00:03:25.002 00:03:25.002 ' 00:03:25.002 18:19:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:25.002 18:19:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:25.002 18:19:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.002 18:19:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.002 ************************************ 00:03:25.002 START TEST skip_rpc 00:03:25.002 ************************************ 00:03:25.002 18:19:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:25.263 
18:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=981863 00:03:25.263 18:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:25.263 18:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:25.263 18:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:25.263 [2024-10-08 18:19:19.117055] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:03:25.263 [2024-10-08 18:19:19.117114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981863 ] 00:03:25.263 [2024-10-08 18:19:19.197914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.263 [2024-10-08 18:19:19.292289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 981863 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 981863 ']' 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 981863 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 981863 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 981863' 00:03:30.558 killing process with pid 981863 00:03:30.558 18:19:24 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 981863 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 981863 00:03:30.558 00:03:30.558 real 0m5.281s 00:03:30.558 user 0m5.031s 00:03:30.558 sys 0m0.300s 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.558 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.558 ************************************ 00:03:30.558 END TEST skip_rpc 00:03:30.558 ************************************ 00:03:30.558 18:19:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:30.558 18:19:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.558 18:19:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.558 18:19:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.558 ************************************ 00:03:30.558 START TEST skip_rpc_with_json 00:03:30.558 ************************************ 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=982904 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 982904 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 982904 ']' 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:30.558 18:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.558 [2024-10-08 18:19:24.475120] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:03:30.558 [2024-10-08 18:19:24.475173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982904 ] 00:03:30.558 [2024-10-08 18:19:24.554391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.558 [2024-10-08 18:19:24.612249] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.499 [2024-10-08 18:19:25.257683] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:31.499 request: 00:03:31.499 { 00:03:31.499 "trtype": "tcp", 00:03:31.499 "method": "nvmf_get_transports", 00:03:31.499 "req_id": 1 00:03:31.499 } 00:03:31.499 Got JSON-RPC error response 00:03:31.499 response: 00:03:31.499 { 00:03:31.499 "code": -19, 00:03:31.499 "message": "No such device" 00:03:31.499 } 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.499 [2024-10-08 18:19:25.269782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:31.499 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.499 { 00:03:31.499 "subsystems": [ 00:03:31.499 { 00:03:31.499 "subsystem": "fsdev", 00:03:31.499 "config": [ 00:03:31.499 { 00:03:31.499 "method": "fsdev_set_opts", 00:03:31.499 "params": { 00:03:31.499 "fsdev_io_pool_size": 65535, 00:03:31.499 "fsdev_io_cache_size": 256 00:03:31.499 } 00:03:31.499 } 00:03:31.499 ] 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "subsystem": "vfio_user_target", 00:03:31.499 "config": null 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "subsystem": "keyring", 00:03:31.499 "config": [] 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "subsystem": "iobuf", 00:03:31.499 "config": [ 00:03:31.499 { 00:03:31.499 "method": "iobuf_set_options", 00:03:31.499 "params": { 00:03:31.499 "small_pool_count": 8192, 00:03:31.499 "large_pool_count": 1024, 00:03:31.499 "small_bufsize": 8192, 00:03:31.499 "large_bufsize": 135168 00:03:31.499 } 00:03:31.499 } 00:03:31.499 ] 00:03:31.499 }, 00:03:31.499 { 
00:03:31.499 "subsystem": "sock", 00:03:31.499 "config": [ 00:03:31.499 { 00:03:31.499 "method": "sock_set_default_impl", 00:03:31.499 "params": { 00:03:31.499 "impl_name": "posix" 00:03:31.499 } 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "method": "sock_impl_set_options", 00:03:31.499 "params": { 00:03:31.499 "impl_name": "ssl", 00:03:31.499 "recv_buf_size": 4096, 00:03:31.499 "send_buf_size": 4096, 00:03:31.499 "enable_recv_pipe": true, 00:03:31.499 "enable_quickack": false, 00:03:31.499 "enable_placement_id": 0, 00:03:31.499 "enable_zerocopy_send_server": true, 00:03:31.499 "enable_zerocopy_send_client": false, 00:03:31.499 "zerocopy_threshold": 0, 00:03:31.499 "tls_version": 0, 00:03:31.499 "enable_ktls": false 00:03:31.499 } 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "method": "sock_impl_set_options", 00:03:31.499 "params": { 00:03:31.499 "impl_name": "posix", 00:03:31.499 "recv_buf_size": 2097152, 00:03:31.499 "send_buf_size": 2097152, 00:03:31.499 "enable_recv_pipe": true, 00:03:31.499 "enable_quickack": false, 00:03:31.499 "enable_placement_id": 0, 00:03:31.499 "enable_zerocopy_send_server": true, 00:03:31.499 "enable_zerocopy_send_client": false, 00:03:31.499 "zerocopy_threshold": 0, 00:03:31.499 "tls_version": 0, 00:03:31.499 "enable_ktls": false 00:03:31.499 } 00:03:31.499 } 00:03:31.499 ] 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "subsystem": "vmd", 00:03:31.499 "config": [] 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "subsystem": "accel", 00:03:31.499 "config": [ 00:03:31.499 { 00:03:31.499 "method": "accel_set_options", 00:03:31.499 "params": { 00:03:31.499 "small_cache_size": 128, 00:03:31.499 "large_cache_size": 16, 00:03:31.499 "task_count": 2048, 00:03:31.499 "sequence_count": 2048, 00:03:31.499 "buf_count": 2048 00:03:31.499 } 00:03:31.499 } 00:03:31.499 ] 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "subsystem": "bdev", 00:03:31.499 "config": [ 00:03:31.499 { 00:03:31.499 "method": "bdev_set_options", 00:03:31.499 "params": { 00:03:31.499 "bdev_io_pool_size": 65535, 00:03:31.499 "bdev_io_cache_size": 256, 00:03:31.499 "bdev_auto_examine": true, 00:03:31.499 "iobuf_small_cache_size": 128, 00:03:31.499 "iobuf_large_cache_size": 16 00:03:31.499 } 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "method": "bdev_raid_set_options", 00:03:31.499 "params": { 00:03:31.499 "process_window_size_kb": 1024, 00:03:31.499 "process_max_bandwidth_mb_sec": 0 00:03:31.499 } 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "method": "bdev_iscsi_set_options", 00:03:31.499 "params": { 00:03:31.499 "timeout_sec": 30 00:03:31.499 } 00:03:31.499 }, 00:03:31.499 { 00:03:31.499 "method": "bdev_nvme_set_options", 00:03:31.499 "params": { 00:03:31.499 "action_on_timeout": "none", 00:03:31.499 "timeout_us": 0, 00:03:31.499 "timeout_admin_us": 0, 00:03:31.500 "keep_alive_timeout_ms": 10000, 00:03:31.500 "arbitration_burst": 0, 00:03:31.500 "low_priority_weight": 0, 00:03:31.500 "medium_priority_weight": 0, 00:03:31.500 "high_priority_weight": 0, 00:03:31.500 "nvme_adminq_poll_period_us": 10000, 00:03:31.500 "nvme_ioq_poll_period_us": 0, 00:03:31.500 "io_queue_requests": 0, 00:03:31.500 "delay_cmd_submit": true, 00:03:31.500 "transport_retry_count": 4, 00:03:31.500 "bdev_retry_count": 3, 00:03:31.500 "transport_ack_timeout": 0, 00:03:31.500 "ctrlr_loss_timeout_sec": 0, 00:03:31.500 "reconnect_delay_sec": 0, 00:03:31.500 "fast_io_fail_timeout_sec": 0, 00:03:31.500 "disable_auto_failback": false, 00:03:31.500 "generate_uuids": false, 00:03:31.500 "transport_tos": 0, 00:03:31.500 "nvme_error_stat": false, 
00:03:31.500 "rdma_srq_size": 0, 00:03:31.500 "io_path_stat": false, 00:03:31.500 "allow_accel_sequence": false, 00:03:31.500 "rdma_max_cq_size": 0, 00:03:31.500 "rdma_cm_event_timeout_ms": 0, 00:03:31.500 "dhchap_digests": [ 00:03:31.500 "sha256", 00:03:31.500 "sha384", 00:03:31.500 "sha512" 00:03:31.500 ], 00:03:31.500 "dhchap_dhgroups": [ 00:03:31.500 "null", 00:03:31.500 "ffdhe2048", 00:03:31.500 "ffdhe3072", 00:03:31.500 "ffdhe4096", 00:03:31.500 "ffdhe6144", 00:03:31.500 "ffdhe8192" 00:03:31.500 ] 00:03:31.500 } 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "method": "bdev_nvme_set_hotplug", 00:03:31.500 "params": { 00:03:31.500 "period_us": 100000, 00:03:31.500 "enable": false 00:03:31.500 } 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "method": "bdev_wait_for_examine" 00:03:31.500 } 00:03:31.500 ] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "scsi", 00:03:31.500 "config": null 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "scheduler", 00:03:31.500 "config": [ 00:03:31.500 { 00:03:31.500 "method": "framework_set_scheduler", 00:03:31.500 "params": { 00:03:31.500 "name": "static" 00:03:31.500 } 00:03:31.500 } 00:03:31.500 ] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "vhost_scsi", 00:03:31.500 "config": [] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "vhost_blk", 00:03:31.500 "config": [] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "ublk", 00:03:31.500 "config": [] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "nbd", 00:03:31.500 "config": [] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "nvmf", 00:03:31.500 "config": [ 00:03:31.500 { 00:03:31.500 "method": "nvmf_set_config", 00:03:31.500 "params": { 00:03:31.500 "discovery_filter": "match_any", 00:03:31.500 "admin_cmd_passthru": { 00:03:31.500 "identify_ctrlr": false 00:03:31.500 }, 00:03:31.500 "dhchap_digests": [ 00:03:31.500 "sha256", 00:03:31.500 "sha384", 00:03:31.500 "sha512" 00:03:31.500 ], 00:03:31.500 "dhchap_dhgroups": [ 00:03:31.500 "null", 00:03:31.500 "ffdhe2048", 00:03:31.500 "ffdhe3072", 00:03:31.500 "ffdhe4096", 00:03:31.500 "ffdhe6144", 00:03:31.500 "ffdhe8192" 00:03:31.500 ] 00:03:31.500 } 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "method": "nvmf_set_max_subsystems", 00:03:31.500 "params": { 00:03:31.500 "max_subsystems": 1024 00:03:31.500 } 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "method": "nvmf_set_crdt", 00:03:31.500 "params": { 00:03:31.500 "crdt1": 0, 00:03:31.500 "crdt2": 0, 00:03:31.500 "crdt3": 0 00:03:31.500 } 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "method": "nvmf_create_transport", 00:03:31.500 "params": { 00:03:31.500 "trtype": "TCP", 00:03:31.500 "max_queue_depth": 128, 00:03:31.500 "max_io_qpairs_per_ctrlr": 127, 00:03:31.500 "in_capsule_data_size": 4096, 00:03:31.500 "max_io_size": 131072, 00:03:31.500 "io_unit_size": 131072, 00:03:31.500 "max_aq_depth": 128, 00:03:31.500 "num_shared_buffers": 511, 00:03:31.500 "buf_cache_size": 4294967295, 00:03:31.500 "dif_insert_or_strip": false, 00:03:31.500 "zcopy": false, 00:03:31.500 "c2h_success": true, 00:03:31.500 "sock_priority": 0, 00:03:31.500 "abort_timeout_sec": 1, 00:03:31.500 "ack_timeout": 0, 00:03:31.500 "data_wr_pool_size": 0 00:03:31.500 } 00:03:31.500 } 00:03:31.500 ] 00:03:31.500 }, 00:03:31.500 { 00:03:31.500 "subsystem": "iscsi", 00:03:31.500 "config": [ 00:03:31.500 { 00:03:31.500 "method": "iscsi_set_options", 00:03:31.500 "params": { 00:03:31.500 "node_base": "iqn.2016-06.io.spdk", 00:03:31.500 "max_sessions": 128, 00:03:31.500 
"max_connections_per_session": 2, 00:03:31.500 "max_queue_depth": 64, 00:03:31.500 "default_time2wait": 2, 00:03:31.500 "default_time2retain": 20, 00:03:31.500 "first_burst_length": 8192, 00:03:31.500 "immediate_data": true, 00:03:31.500 "allow_duplicated_isid": false, 00:03:31.500 "error_recovery_level": 0, 00:03:31.500 "nop_timeout": 60, 00:03:31.500 "nop_in_interval": 30, 00:03:31.500 "disable_chap": false, 00:03:31.500 "require_chap": false, 00:03:31.500 "mutual_chap": false, 00:03:31.500 "chap_group": 0, 00:03:31.500 "max_large_datain_per_connection": 64, 00:03:31.500 "max_r2t_per_connection": 4, 00:03:31.500 "pdu_pool_size": 36864, 00:03:31.500 "immediate_data_pool_size": 16384, 00:03:31.500 "data_out_pool_size": 2048 00:03:31.500 } 00:03:31.500 } 00:03:31.500 ] 00:03:31.500 } 00:03:31.500 ] 00:03:31.500 } 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 982904 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 982904 ']' 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 982904 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982904 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982904' 00:03:31.500 killing process with pid 982904 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 982904 00:03:31.500 18:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 982904 00:03:31.762 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=983242 00:03:31.762 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:31.762 18:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 983242 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 983242 ']' 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 983242 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 983242 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 983242' 00:03:37.045 killing process with pid 983242 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 983242 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 983242 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.045 00:03:37.045 real 0m6.574s 00:03:37.045 user 0m6.456s 00:03:37.045 sys 0m0.582s 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.045 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.045 ************************************ 00:03:37.045 END TEST skip_rpc_with_json 00:03:37.045 ************************************ 00:03:37.045 18:19:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:37.045 18:19:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.045 18:19:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.045 18:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.045 ************************************ 00:03:37.045 START TEST skip_rpc_with_delay 00:03:37.045 ************************************ 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.045 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.046 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.046 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.046 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.046 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:37.046 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.307 [2024-10-08 18:19:31.130524] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:37.307 [2024-10-08 18:19:31.130599] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:37.307 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:37.307 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:37.307 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:37.307 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:37.307 00:03:37.307 real 0m0.077s 00:03:37.307 user 0m0.046s 00:03:37.307 sys 0m0.031s 00:03:37.307 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.307 18:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:37.307 ************************************ 00:03:37.307 END TEST skip_rpc_with_delay 00:03:37.307 ************************************ 00:03:37.307 18:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:37.307 18:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:37.307 18:19:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:37.307 18:19:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.307 18:19:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.307 18:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.307 ************************************ 00:03:37.307 START TEST exit_on_failed_rpc_init 00:03:37.307 ************************************ 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=984307 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 984307 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 984307 ']' 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:37.307 18:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:37.307 [2024-10-08 18:19:31.284887] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
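
The skip_rpc_with_delay pass that ends above exercises an argument-validation rule rather than a running server: --wait-for-rpc is rejected outright when --no-rpc-server is given, since there would be no RPC server to wait on. A minimal sketch of the expected failure (workspace paths assumed):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  if ! "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "expected failure: --wait-for-rpc without an RPC server"
  fi
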
00:03:37.307 [2024-10-08 18:19:31.284942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984307 ] 00:03:37.567 [2024-10-08 18:19:31.366141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.567 [2024-10-08 18:19:31.427282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:38.137 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.137 [2024-10-08 18:19:32.123390] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:03:38.137 [2024-10-08 18:19:32.123459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984491 ] 00:03:38.397 [2024-10-08 18:19:32.202358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.397 [2024-10-08 18:19:32.267381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:03:38.397 [2024-10-08 18:19:32.267445] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:38.397 [2024-10-08 18:19:32.267454] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:38.397 [2024-10-08 18:19:32.267461] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 984307 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 984307 ']' 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 984307 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 984307 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 984307' 00:03:38.397 killing process with pid 984307 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 984307 00:03:38.397 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 984307 00:03:38.656 00:03:38.656 real 0m1.367s 00:03:38.656 user 0m1.618s 00:03:38.656 sys 0m0.380s 00:03:38.656 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.656 18:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:38.656 ************************************ 00:03:38.656 END TEST exit_on_failed_rpc_init 00:03:38.656 ************************************ 00:03:38.656 18:19:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:38.656 00:03:38.656 real 0m13.818s 00:03:38.656 user 0m13.377s 00:03:38.656 sys 0m1.615s 00:03:38.656 18:19:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.656 18:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.656 ************************************ 00:03:38.656 END TEST skip_rpc 00:03:38.656 ************************************ 00:03:38.656 18:19:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:38.656 18:19:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.656 18:19:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.656 18:19:32 -- 
common/autotest_common.sh@10 -- # set +x 00:03:38.656 ************************************ 00:03:38.656 START TEST rpc_client 00:03:38.656 ************************************ 00:03:38.656 18:19:32 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:38.917 * Looking for test storage... 00:03:38.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.917 18:19:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 18:19:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:38.917 OK 00:03:38.917 18:19:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:38.917 00:03:38.917 real 0m0.220s 00:03:38.917 user 0m0.144s 00:03:38.917 sys 0m0.089s 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.917 18:19:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:38.917 ************************************ 00:03:38.917 END TEST rpc_client 00:03:38.917 ************************************ 00:03:38.917 18:19:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
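
The lt/cmp_versions trace above is a plain dotted-version comparison: split both strings into fields, then compare numerically field by field. A standalone sketch of the same idea (not the exact scripts/common.sh code, which per the IFS=.-: line also splits on '-' and ':'):

  # succeed when $1 sorts strictly before $2 (numeric fields only)
  ver_lt() {
      local -a a b; local i
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
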
00:03:38.917 18:19:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.917 18:19:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.917 18:19:32 -- common/autotest_common.sh@10 -- # set +x 00:03:39.208 ************************************ 00:03:39.208 START TEST json_config 00:03:39.208 ************************************ 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.208 18:19:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.208 18:19:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.208 18:19:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.208 18:19:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.208 18:19:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.208 18:19:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:39.208 18:19:33 json_config -- scripts/common.sh@345 -- # : 1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.208 18:19:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.208 18:19:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@353 -- # local d=1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.208 18:19:33 json_config -- scripts/common.sh@355 -- # echo 1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.208 18:19:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@353 -- # local d=2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.208 18:19:33 json_config -- scripts/common.sh@355 -- # echo 2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.208 18:19:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.208 18:19:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.208 18:19:33 json_config -- scripts/common.sh@368 -- # return 0 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:39.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.208 --rc genhtml_branch_coverage=1 00:03:39.208 --rc genhtml_function_coverage=1 00:03:39.208 --rc genhtml_legend=1 00:03:39.208 --rc geninfo_all_blocks=1 00:03:39.208 --rc geninfo_unexecuted_blocks=1 00:03:39.208 00:03:39.208 ' 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:39.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.208 --rc genhtml_branch_coverage=1 00:03:39.208 --rc genhtml_function_coverage=1 00:03:39.208 --rc genhtml_legend=1 00:03:39.208 --rc geninfo_all_blocks=1 00:03:39.208 --rc geninfo_unexecuted_blocks=1 00:03:39.208 00:03:39.208 ' 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:39.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.208 --rc genhtml_branch_coverage=1 00:03:39.208 --rc genhtml_function_coverage=1 00:03:39.208 --rc genhtml_legend=1 00:03:39.208 --rc geninfo_all_blocks=1 00:03:39.208 --rc geninfo_unexecuted_blocks=1 00:03:39.208 00:03:39.208 ' 00:03:39.208 18:19:33 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:39.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.208 --rc genhtml_branch_coverage=1 00:03:39.208 --rc genhtml_function_coverage=1 00:03:39.208 --rc genhtml_legend=1 00:03:39.208 --rc geninfo_all_blocks=1 00:03:39.208 --rc geninfo_unexecuted_blocks=1 00:03:39.208 00:03:39.208 ' 00:03:39.208 18:19:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:39.208 18:19:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:39.208 18:19:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.209 18:19:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:39.209 18:19:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.209 18:19:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.209 18:19:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.209 18:19:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.209 18:19:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.209 18:19:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.209 18:19:33 json_config -- paths/export.sh@5 -- # export PATH 00:03:39.209 18:19:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@51 -- # : 0 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:39.209 18:19:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:39.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:39.209 18:19:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:39.209 INFO: JSON configuration test init 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.209 18:19:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:39.209 18:19:33 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:39.209 18:19:33 json_config -- json_config/common.sh@10 -- # shift 00:03:39.209 18:19:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:39.209 18:19:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:39.209 18:19:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:39.209 18:19:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.209 18:19:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.209 18:19:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=984780 00:03:39.209 18:19:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:39.209 Waiting for target to run... 00:03:39.209 18:19:33 json_config -- json_config/common.sh@25 -- # waitforlisten 984780 /var/tmp/spdk_tgt.sock 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@831 -- # '[' -z 984780 ']' 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:39.209 18:19:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:39.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:39.209 18:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.469 [2024-10-08 18:19:33.278106] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
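
json_config_test_start_app, entered above, launches the target paused and only then configures it over the RPC socket named by -r. Reduced to its core, the pattern is (a sketch; the flags and socket path are the ones this job uses):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # ...configuration RPCs go here, then release the app:
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock framework_start_init
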
00:03:39.469 [2024-10-08 18:19:33.278156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984780 ] 00:03:39.730 [2024-10-08 18:19:33.628153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.730 [2024-10-08 18:19:33.687020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.300 18:19:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:40.300 18:19:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:40.300 18:19:34 json_config -- json_config/common.sh@26 -- # echo '' 00:03:40.300 00:03:40.300 18:19:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:40.300 18:19:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:40.300 18:19:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.300 18:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.300 18:19:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:40.300 18:19:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:40.300 18:19:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:40.300 18:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.300 18:19:34 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:40.300 18:19:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:40.300 18:19:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:40.869 18:19:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.869 18:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:40.869 18:19:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:40.869 18:19:34 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@54 -- # sort 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:40.869 18:19:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:40.869 18:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:40.869 18:19:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.869 18:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:40.869 18:19:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:40.869 18:19:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:41.130 MallocForNvmf0 00:03:41.130 18:19:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:41.130 18:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:41.389 MallocForNvmf1 00:03:41.389 18:19:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:41.389 18:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:41.389 [2024-10-08 18:19:35.437399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.649 18:19:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:41.650 18:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:41.650 18:19:35 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:41.650 18:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:41.910 18:19:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:41.910 18:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:42.171 18:19:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:42.171 18:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:42.171 [2024-10-08 18:19:36.107452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:42.171 18:19:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:42.171 18:19:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.171 18:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.171 18:19:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:42.171 18:19:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.171 18:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.171 18:19:36 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:42.171 18:19:36 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.171 18:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.544 MallocBdevForConfigChangeCheck 00:03:42.544 18:19:36 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:42.544 18:19:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.544 18:19:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.544 18:19:36 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:42.544 18:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:42.848 18:19:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:42.848 INFO: shutting down applications... 
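
Before the shutdown announced above, the NVMe-oF target state was assembled with the tgt_rpc calls traced in this pass; condensed into direct rpc.py invocations, the sequence is:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
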
00:03:42.848 18:19:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:03:42.848 18:19:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:03:42.848 18:19:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:03:42.848 18:19:36 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:03:43.141 Calling clear_iscsi_subsystem
00:03:43.141 Calling clear_nvmf_subsystem
00:03:43.141 Calling clear_nbd_subsystem
00:03:43.141 Calling clear_ublk_subsystem
00:03:43.141 Calling clear_vhost_blk_subsystem
00:03:43.141 Calling clear_vhost_scsi_subsystem
00:03:43.141 Calling clear_bdev_subsystem
00:03:43.141 18:19:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:03:43.141 18:19:37 json_config -- json_config/json_config.sh@350 -- # count=100
00:03:43.141 18:19:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:03:43.141 18:19:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:43.141 18:19:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:03:43.141 18:19:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:03:43.713 18:19:37 json_config -- json_config/json_config.sh@352 -- # break
00:03:43.713 18:19:37 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:03:43.713 18:19:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:03:43.713 18:19:37 json_config -- json_config/common.sh@31 -- # local app=target
00:03:43.713 18:19:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:43.713 18:19:37 json_config -- json_config/common.sh@35 -- # [[ -n 984780 ]]
00:03:43.713 18:19:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 984780
00:03:43.713 18:19:37 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:43.713 18:19:37 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:43.713 18:19:37 json_config -- json_config/common.sh@41 -- # kill -0 984780
00:03:43.713 18:19:37 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:03:43.974 18:19:38 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:03:43.974 18:19:38 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:43.974 18:19:38 json_config -- json_config/common.sh@41 -- # kill -0 984780
00:03:43.974 18:19:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:03:43.974 18:19:38 json_config -- json_config/common.sh@43 -- # break
00:03:43.974 18:19:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:03:43.974 18:19:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:03:43.974 SPDK target shutdown done
00:03:43.974 18:19:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:03:43.974 INFO: relaunching applications...
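(Editorial sketch of the graceful-shutdown pattern traced above in json_config/common.sh: SIGINT the target, then poll with kill -0 for up to 30 half-second intervals before giving up:)

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone -> clean shutdown
        sleep 0.5
    done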
00:03:43.974 18:19:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:43.974 18:19:38 json_config -- json_config/common.sh@9 -- # local app=target 00:03:43.974 18:19:38 json_config -- json_config/common.sh@10 -- # shift 00:03:43.974 18:19:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:43.974 18:19:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:43.974 18:19:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:43.974 18:19:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.974 18:19:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.974 18:19:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=985923 00:03:43.974 18:19:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:43.974 Waiting for target to run... 00:03:43.974 18:19:38 json_config -- json_config/common.sh@25 -- # waitforlisten 985923 /var/tmp/spdk_tgt.sock 00:03:43.974 18:19:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:43.974 18:19:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 985923 ']' 00:03:43.974 18:19:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:43.974 18:19:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:43.974 18:19:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:43.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:43.974 18:19:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:43.974 18:19:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.235 [2024-10-08 18:19:38.084880] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:03:44.235 [2024-10-08 18:19:38.084948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985923 ] 00:03:44.495 [2024-10-08 18:19:38.385375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.495 [2024-10-08 18:19:38.439234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.067 [2024-10-08 18:19:38.943856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:45.067 [2024-10-08 18:19:38.976228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:45.067 18:19:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:45.067 18:19:39 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:45.067 18:19:39 json_config -- json_config/common.sh@26 -- # echo '' 00:03:45.067 00:03:45.067 18:19:39 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:45.067 18:19:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:45.067 INFO: Checking if target configuration is the same... 
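(Editorial sketch: the relaunch above shows the round-trip being tested — a config dumped with save_config can recreate the whole target via --json at startup. Same invocation as the trace, with the workspace prefix shortened for readability:)

    # Replay a previously saved configuration at boot; the subsystem, namespaces and
    # listener all come back without any further RPC calls.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json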
00:03:45.067 18:19:39 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:45.067 18:19:39 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:03:45.067 18:19:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:45.067 + '[' 2 -ne 2 ']'
00:03:45.067 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:03:45.067 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:03:45.067 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:45.067 +++ basename /dev/fd/62
00:03:45.067 ++ mktemp /tmp/62.XXX
00:03:45.067 + tmp_file_1=/tmp/62.anG
00:03:45.067 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:45.067 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:03:45.067 + tmp_file_2=/tmp/spdk_tgt_config.json.XB2
00:03:45.067 + ret=0
00:03:45.067 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:45.328 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:45.588 + diff -u /tmp/62.anG /tmp/spdk_tgt_config.json.XB2
00:03:45.588 + echo 'INFO: JSON config files are the same'
00:03:45.588 INFO: JSON config files are the same
00:03:45.588 + rm /tmp/62.anG /tmp/spdk_tgt_config.json.XB2
00:03:45.588 + exit 0
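(Editorial sketch of the comparison json_diff.sh just performed: both configs are canonicalized with config_filter.py -method sort before diffing, so ordering differences don't count as changes. The /tmp file names below are illustrative — the real script uses mktemp, as the trace shows:)

    rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | config_filter.py -method sort > /tmp/cfg_live.json      # running target's view
    config_filter.py -method sort < spdk_tgt_config.json > /tmp/cfg_saved.json
    diff -u /tmp/cfg_live.json /tmp/cfg_saved.json \
        && echo 'INFO: JSON config files are the same'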
00:03:45.589 18:19:39 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:03:45.589 18:19:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:03:45.589 INFO: changing configuration and checking if this can be detected...
00:03:45.589 18:19:39 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:03:45.589 18:19:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:03:45.589 18:19:39 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:45.589 18:19:39 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:03:45.589 18:19:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:45.589 + '[' 2 -ne 2 ']'
00:03:45.589 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:03:45.589 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:03:45.589 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:45.589 +++ basename /dev/fd/62
00:03:45.589 ++ mktemp /tmp/62.XXX
00:03:45.589 + tmp_file_1=/tmp/62.9TJ
00:03:45.589 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:45.589 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:03:45.589 + tmp_file_2=/tmp/spdk_tgt_config.json.Qvn
00:03:45.589 + ret=0
00:03:45.589 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:45.849 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:46.109 + diff -u /tmp/62.9TJ /tmp/spdk_tgt_config.json.Qvn
00:03:46.109 + ret=1
00:03:46.109 + echo '=== Start of file: /tmp/62.9TJ ==='
00:03:46.109 + cat /tmp/62.9TJ
00:03:46.109 + echo '=== End of file: /tmp/62.9TJ ==='
00:03:46.109 + echo ''
00:03:46.109 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Qvn ==='
00:03:46.109 + cat /tmp/spdk_tgt_config.json.Qvn
00:03:46.109 + echo '=== End of file: /tmp/spdk_tgt_config.json.Qvn ==='
00:03:46.109 + echo ''
00:03:46.109 + rm /tmp/62.9TJ /tmp/spdk_tgt_config.json.Qvn
00:03:46.109 + exit 1
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:03:46.109 INFO: configuration change detected.
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:03:46.109 18:19:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:46.109 18:19:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@324 -- # [[ -n 985923 ]]
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:03:46.109 18:19:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:46.109 18:19:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@200 -- # uname -s
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:03:46.109 18:19:39 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:03:46.109 18:19:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:46.109 18:19:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:46.109 18:19:40 json_config -- json_config/json_config.sh@330 -- # killprocess 985923
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@950 -- # '[' -z 985923 ']'
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@954 -- # kill -0 985923
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@955 -- # uname
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 985923
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 985923'
00:03:46.109 killing process with pid 985923
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@969 -- # kill 985923
00:03:46.109 18:19:40 json_config -- common/autotest_common.sh@974 -- # wait 985923
00:03:46.370 18:19:40 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:46.370 18:19:40 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:03:46.370 18:19:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:46.370 18:19:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:46.370 18:19:40 json_config -- json_config/json_config.sh@335 -- # return 0
00:03:46.370 18:19:40 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:03:46.370 INFO: Success
00:03:46.370
00:03:46.370 real 0m7.397s
00:03:46.370 user 0m8.904s
00:03:46.370 sys 0m1.989s
00:03:46.370 18:19:40 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:46.370 18:19:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:46.370 ************************************
00:03:46.370 END TEST json_config
00:03:46.370 ************************************
00:03:46.631 18:19:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:03:46.631 18:19:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:46.631 18:19:40 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:46.631 18:19:40 -- common/autotest_common.sh@10 -- # set +x
00:03:46.631 ************************************
00:03:46.631 START TEST json_config_extra_key
00:03:46.631 ************************************
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:03:46.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.631 --rc genhtml_branch_coverage=1
00:03:46.631 --rc genhtml_function_coverage=1
00:03:46.631 --rc genhtml_legend=1
00:03:46.631 --rc geninfo_all_blocks=1
00:03:46.631 --rc geninfo_unexecuted_blocks=1
00:03:46.631
00:03:46.631 '
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:03:46.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.631 --rc genhtml_branch_coverage=1
00:03:46.631 --rc genhtml_function_coverage=1
00:03:46.631 --rc genhtml_legend=1
00:03:46.631 --rc geninfo_all_blocks=1
00:03:46.631 --rc geninfo_unexecuted_blocks=1
00:03:46.631
00:03:46.631 '
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:03:46.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.631 --rc genhtml_branch_coverage=1
00:03:46.631 --rc genhtml_function_coverage=1
00:03:46.631 --rc genhtml_legend=1
00:03:46.631 --rc geninfo_all_blocks=1
00:03:46.631 --rc geninfo_unexecuted_blocks=1
00:03:46.631
00:03:46.631 '
00:03:46.631 18:19:40 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:03:46.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.631 --rc genhtml_branch_coverage=1
00:03:46.631 --rc genhtml_function_coverage=1
00:03:46.631 --rc genhtml_legend=1
00:03:46.631 --rc geninfo_all_blocks=1
00:03:46.631 --rc geninfo_unexecuted_blocks=1
00:03:46.631
00:03:46.631 '
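(Editorial sketch of the version comparison traced above in scripts/common.sh: each version string is split on '.', '-' and ':', then compared numerically component by component, which is why 1.15 sorts below 2. Numeric components only, as in the trace:)

    lt() {
        local -a ver1 ver2
        local IFS='.-:' v n
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            # Missing components default to 0, so 1.15 compares like 1.15.0 vs 2.0.0.
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        done
        return 1   # equal is not less-than
    }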
00:03:46.631 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:46.631 18:19:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:46.631 18:19:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:46.631 18:19:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:46.631 18:19:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:46.631 18:19:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:03:46.631 18:19:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:46.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:46.631 18:19:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:46.631 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:03:46.892 INFO: launching applications...
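(Editorial note on the "[: : integer expression expected" message captured above: nvmf/common.sh line 33 runs test(1) as '[' '' -eq 1 ']', i.e. a numeric comparison against an expansion that came up empty, and the test exits nonzero with that complaint. A hedged sketch of the failure and a defensive rewrite — 'flag' is purely illustrative, standing in for whatever variable the script actually expands:)

    flag=''                       # unset/empty, as the trace shows
    [ "$flag" -eq 1 ]             # -> "[: : integer expression expected", as in the log
    [ "${flag:-0}" -eq 1 ]        # defaulting the expansion to 0 avoids the error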
00:03:46.892 18:19:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=986638
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:03:46.892 Waiting for target to run...
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 986638 /var/tmp/spdk_tgt.sock
00:03:46.892 18:19:40 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 986638 ']'
00:03:46.892 18:19:40 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:46.892 18:19:40 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:46.892 18:19:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:03:46.892 18:19:40 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:03:46.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:46.892 18:19:40 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:46.892 18:19:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:03:47.152 [2024-10-08 18:19:40.760989] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:03:47.152 [2024-10-08 18:19:40.761060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986638 ]
00:03:47.152 [2024-10-08 18:19:41.100369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:47.152 [2024-10-08 18:19:41.143917] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:03:47.723 18:19:41 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:03:47.723 18:19:41 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:03:47.723
00:03:47.723 18:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:03:47.723 INFO: shutting down applications...
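(Editorial sketch of the waitforlisten pattern used in the launch above: the helper polls the freshly started target until its RPC socket answers, bailing out early if the process dies. The retry loop below is a hedged reconstruction, not the helper's exact body; rpc_get_methods is a cheap no-argument probe seen elsewhere in this log:)

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                 # target died during startup
            rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }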
00:03:47.723 18:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 986638 ]]
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 986638
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 986638
00:03:47.723 18:19:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 986638
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@43 -- # break
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:03:48.294 18:19:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:03:48.294 SPDK target shutdown done
00:03:48.294 18:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:03:48.294 Success
00:03:48.294
00:03:48.294 real 0m1.586s
00:03:48.294 user 0m1.159s
00:03:48.294 sys 0m0.472s
00:03:48.294 18:19:42 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:48.294 18:19:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:03:48.294 ************************************
00:03:48.294 END TEST json_config_extra_key
00:03:48.294 ************************************
00:03:48.294 18:19:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:03:48.294 18:19:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:48.294 18:19:42 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:48.294 18:19:42 -- common/autotest_common.sh@10 -- # set +x
00:03:48.294 ************************************
00:03:48.294 START TEST alias_rpc
00:03:48.294 ************************************
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:03:48.294 * Looking for test storage...
00:03:48.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@345 -- # : 1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:48.294 18:19:42 alias_rpc -- scripts/common.sh@368 -- # return 0
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:03:48.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.294 --rc genhtml_branch_coverage=1
00:03:48.294 --rc genhtml_function_coverage=1
00:03:48.294 --rc genhtml_legend=1
00:03:48.294 --rc geninfo_all_blocks=1
00:03:48.294 --rc geninfo_unexecuted_blocks=1
00:03:48.294
00:03:48.294 '
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:03:48.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.294 --rc genhtml_branch_coverage=1
00:03:48.294 --rc genhtml_function_coverage=1
00:03:48.294 --rc genhtml_legend=1
00:03:48.294 --rc geninfo_all_blocks=1
00:03:48.294 --rc geninfo_unexecuted_blocks=1
00:03:48.294
00:03:48.294 '
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:03:48.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.294 --rc genhtml_branch_coverage=1
00:03:48.294 --rc genhtml_function_coverage=1
00:03:48.294 --rc genhtml_legend=1
00:03:48.294 --rc geninfo_all_blocks=1
00:03:48.294 --rc geninfo_unexecuted_blocks=1
00:03:48.294
00:03:48.294 '
00:03:48.294 18:19:42 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:03:48.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:48.294 --rc genhtml_branch_coverage=1
00:03:48.294 --rc genhtml_function_coverage=1
00:03:48.295 --rc genhtml_legend=1
00:03:48.295 --rc geninfo_all_blocks=1
00:03:48.295 --rc geninfo_unexecuted_blocks=1
00:03:48.295
00:03:48.295 '
00:03:48.295 18:19:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:03:48.295 18:19:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=986998
00:03:48.295 18:19:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 986998
00:03:48.295 18:19:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:48.295 18:19:42 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 986998 ']'
00:03:48.295 18:19:42 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:48.295 18:19:42 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:48.295 18:19:42 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:48.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:48.295 18:19:42 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:48.295 18:19:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:48.555 [2024-10-08 18:19:42.407611] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:03:48.555 [2024-10-08 18:19:42.407686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986998 ]
00:03:48.555 [2024-10-08 18:19:42.485959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:48.555 [2024-10-08 18:19:42.547802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:03:49.496 18:19:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:03:49.496 18:19:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 986998
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 986998 ']'
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 986998
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 986998
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 986998'
00:03:49.496 killing process with pid 986998
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@969 -- # kill 986998
00:03:49.496 18:19:43 alias_rpc -- common/autotest_common.sh@974 -- # wait 986998
00:03:49.757
00:03:49.757 real 0m1.507s
00:03:49.757 user 0m1.635s
00:03:49.757 sys 0m0.434s
00:03:49.757 18:19:43 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:49.757 18:19:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:49.757 ************************************
00:03:49.757 END TEST alias_rpc
00:03:49.757 ************************************
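(Editorial sketch of the alias test's core step above: 'rpc.py load_config -i' feeds a saved configuration to the running target over stdin. Reading -i as --include-aliases — plausible for an alias test, but treat that flag interpretation as an assumption. Using the default /var/tmp/spdk.sock socket, as the trace does:)

    rpc.py save_config > /tmp/cfg.json        # dump the running target's config
    rpc.py load_config -i < /tmp/cfg.json     # replay it through the RPC alias path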
00:03:49.757 18:19:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:03:49.757 18:19:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:03:49.757 18:19:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:49.757 18:19:43 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:49.757 18:19:43 -- common/autotest_common.sh@10 -- # set +x
00:03:49.757 ************************************
00:03:49.757 START TEST spdkcli_tcp
00:03:49.757 ************************************
00:03:49.757 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:03:50.017 * Looking for test storage...
00:03:50.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:50.017 18:19:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:03:50.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:50.017 --rc genhtml_branch_coverage=1
00:03:50.017 --rc genhtml_function_coverage=1
00:03:50.017 --rc genhtml_legend=1
00:03:50.017 --rc geninfo_all_blocks=1
00:03:50.017 --rc geninfo_unexecuted_blocks=1
00:03:50.017
00:03:50.017 '
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:03:50.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:50.017 --rc genhtml_branch_coverage=1
00:03:50.017 --rc genhtml_function_coverage=1
00:03:50.017 --rc genhtml_legend=1
00:03:50.017 --rc geninfo_all_blocks=1
00:03:50.017 --rc geninfo_unexecuted_blocks=1
00:03:50.017
00:03:50.017 '
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:03:50.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:50.017 --rc genhtml_branch_coverage=1
00:03:50.017 --rc genhtml_function_coverage=1
00:03:50.017 --rc genhtml_legend=1
00:03:50.017 --rc geninfo_all_blocks=1
00:03:50.017 --rc geninfo_unexecuted_blocks=1
00:03:50.017
00:03:50.017 '
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:03:50.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:50.017 --rc genhtml_branch_coverage=1
00:03:50.017 --rc genhtml_function_coverage=1
00:03:50.017 --rc genhtml_legend=1
00:03:50.017 --rc geninfo_all_blocks=1
00:03:50.017 --rc geninfo_unexecuted_blocks=1
00:03:50.017
00:03:50.017 '
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=987338
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 987338
00:03:50.017 18:19:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 987338 ']'
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:50.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:50.017 18:19:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:03:50.018 [2024-10-08 18:19:44.000709] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
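(Editorial sketch: the spdkcli_tcp run starting here exercises rpc.py over TCP instead of the UNIX socket — the trace just below shows socat bridging 127.0.0.1:9998 to /var/tmp/spdk.sock and the client dialing the port. Both commands are taken verbatim from the trace; backgrounding the bridge matches the socat_pid capture that follows it:)

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &      # TCP-to-UNIX-socket bridge
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods      # RPC over TCP, with retries and a timeout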
00:03:50.018 [2024-10-08 18:19:44.000779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987338 ]
00:03:50.278 [2024-10-08 18:19:44.083823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:03:50.278 [2024-10-08 18:19:44.155230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:03:50.278 [2024-10-08 18:19:44.155250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:03:50.848 18:19:44 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:03:50.848 18:19:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0
00:03:50.848 18:19:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:03:50.849 18:19:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=987523
00:03:50.849 18:19:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:03:51.110 [
00:03:51.110 "bdev_malloc_delete",
00:03:51.110 "bdev_malloc_create",
00:03:51.110 "bdev_null_resize",
00:03:51.110 "bdev_null_delete",
00:03:51.110 "bdev_null_create",
00:03:51.110 "bdev_nvme_cuse_unregister",
00:03:51.110 "bdev_nvme_cuse_register",
00:03:51.110 "bdev_opal_new_user",
00:03:51.110 "bdev_opal_set_lock_state",
00:03:51.110 "bdev_opal_delete",
00:03:51.110 "bdev_opal_get_info",
00:03:51.110 "bdev_opal_create",
00:03:51.110 "bdev_nvme_opal_revert",
00:03:51.110 "bdev_nvme_opal_init",
00:03:51.110 "bdev_nvme_send_cmd",
00:03:51.110 "bdev_nvme_set_keys",
00:03:51.110 "bdev_nvme_get_path_iostat",
00:03:51.110 "bdev_nvme_get_mdns_discovery_info",
00:03:51.110 "bdev_nvme_stop_mdns_discovery",
00:03:51.110 "bdev_nvme_start_mdns_discovery",
00:03:51.110 "bdev_nvme_set_multipath_policy",
00:03:51.110 "bdev_nvme_set_preferred_path",
00:03:51.110 "bdev_nvme_get_io_paths",
00:03:51.110 "bdev_nvme_remove_error_injection",
00:03:51.110 "bdev_nvme_add_error_injection",
00:03:51.110 "bdev_nvme_get_discovery_info",
00:03:51.110 "bdev_nvme_stop_discovery",
00:03:51.110 "bdev_nvme_start_discovery",
00:03:51.110 "bdev_nvme_get_controller_health_info",
00:03:51.110 "bdev_nvme_disable_controller",
00:03:51.110 "bdev_nvme_enable_controller",
00:03:51.110 "bdev_nvme_reset_controller",
00:03:51.110 "bdev_nvme_get_transport_statistics",
00:03:51.110 "bdev_nvme_apply_firmware",
00:03:51.110 "bdev_nvme_detach_controller",
00:03:51.110 "bdev_nvme_get_controllers",
00:03:51.110 "bdev_nvme_attach_controller",
00:03:51.110 "bdev_nvme_set_hotplug",
00:03:51.110 "bdev_nvme_set_options",
00:03:51.110 "bdev_passthru_delete",
00:03:51.110 "bdev_passthru_create",
00:03:51.110 "bdev_lvol_set_parent_bdev",
00:03:51.110 "bdev_lvol_set_parent",
00:03:51.110 "bdev_lvol_check_shallow_copy",
00:03:51.110 "bdev_lvol_start_shallow_copy",
00:03:51.110 "bdev_lvol_grow_lvstore",
00:03:51.110 "bdev_lvol_get_lvols",
00:03:51.110 "bdev_lvol_get_lvstores",
00:03:51.110 "bdev_lvol_delete",
00:03:51.110 "bdev_lvol_set_read_only",
00:03:51.110 "bdev_lvol_resize",
00:03:51.110 "bdev_lvol_decouple_parent",
00:03:51.110 "bdev_lvol_inflate",
00:03:51.110 "bdev_lvol_rename",
00:03:51.110 "bdev_lvol_clone_bdev",
00:03:51.110 "bdev_lvol_clone",
00:03:51.110 "bdev_lvol_snapshot",
00:03:51.110 "bdev_lvol_create",
00:03:51.110 "bdev_lvol_delete_lvstore",
00:03:51.110 "bdev_lvol_rename_lvstore",
00:03:51.110 "bdev_lvol_create_lvstore",
00:03:51.110 "bdev_raid_set_options",
00:03:51.110 "bdev_raid_remove_base_bdev",
00:03:51.110 "bdev_raid_add_base_bdev",
00:03:51.110 "bdev_raid_delete",
00:03:51.110 "bdev_raid_create",
00:03:51.110 "bdev_raid_get_bdevs",
00:03:51.110 "bdev_error_inject_error",
00:03:51.110 "bdev_error_delete",
00:03:51.110 "bdev_error_create",
00:03:51.110 "bdev_split_delete",
00:03:51.110 "bdev_split_create",
00:03:51.110 "bdev_delay_delete",
00:03:51.110 "bdev_delay_create",
00:03:51.110 "bdev_delay_update_latency",
00:03:51.110 "bdev_zone_block_delete",
00:03:51.110 "bdev_zone_block_create",
00:03:51.110 "blobfs_create",
00:03:51.110 "blobfs_detect",
00:03:51.110 "blobfs_set_cache_size",
00:03:51.110 "bdev_aio_delete",
00:03:51.110 "bdev_aio_rescan",
00:03:51.110 "bdev_aio_create",
00:03:51.110 "bdev_ftl_set_property",
00:03:51.110 "bdev_ftl_get_properties",
00:03:51.110 "bdev_ftl_get_stats",
00:03:51.110 "bdev_ftl_unmap",
00:03:51.110 "bdev_ftl_unload",
00:03:51.110 "bdev_ftl_delete",
00:03:51.110 "bdev_ftl_load",
00:03:51.110 "bdev_ftl_create",
00:03:51.110 "bdev_virtio_attach_controller",
00:03:51.110 "bdev_virtio_scsi_get_devices",
00:03:51.110 "bdev_virtio_detach_controller",
00:03:51.110 "bdev_virtio_blk_set_hotplug",
00:03:51.110 "bdev_iscsi_delete",
00:03:51.110 "bdev_iscsi_create",
00:03:51.110 "bdev_iscsi_set_options",
00:03:51.110 "accel_error_inject_error",
00:03:51.110 "ioat_scan_accel_module",
00:03:51.110 "dsa_scan_accel_module",
00:03:51.110 "iaa_scan_accel_module",
00:03:51.110 "vfu_virtio_create_fs_endpoint",
00:03:51.110 "vfu_virtio_create_scsi_endpoint",
00:03:51.110 "vfu_virtio_scsi_remove_target",
00:03:51.110 "vfu_virtio_scsi_add_target",
00:03:51.110 "vfu_virtio_create_blk_endpoint",
00:03:51.110 "vfu_virtio_delete_endpoint",
00:03:51.110 "keyring_file_remove_key",
00:03:51.110 "keyring_file_add_key",
00:03:51.110 "keyring_linux_set_options",
00:03:51.110 "fsdev_aio_delete",
00:03:51.110 "fsdev_aio_create",
00:03:51.110 "iscsi_get_histogram",
00:03:51.110 "iscsi_enable_histogram",
00:03:51.110 "iscsi_set_options",
00:03:51.110 "iscsi_get_auth_groups",
00:03:51.110 "iscsi_auth_group_remove_secret",
00:03:51.110 "iscsi_auth_group_add_secret",
00:03:51.110 "iscsi_delete_auth_group",
00:03:51.110 "iscsi_create_auth_group",
00:03:51.110 "iscsi_set_discovery_auth",
00:03:51.110 "iscsi_get_options",
00:03:51.110 "iscsi_target_node_request_logout",
00:03:51.110 "iscsi_target_node_set_redirect",
00:03:51.110 "iscsi_target_node_set_auth",
00:03:51.110 "iscsi_target_node_add_lun",
00:03:51.110 "iscsi_get_stats",
00:03:51.110 "iscsi_get_connections",
00:03:51.110 "iscsi_portal_group_set_auth",
00:03:51.110 "iscsi_start_portal_group",
00:03:51.110 "iscsi_delete_portal_group",
00:03:51.110 "iscsi_create_portal_group",
00:03:51.110 "iscsi_get_portal_groups",
00:03:51.110 "iscsi_delete_target_node",
00:03:51.110 "iscsi_target_node_remove_pg_ig_maps",
00:03:51.110 "iscsi_target_node_add_pg_ig_maps",
00:03:51.110 "iscsi_create_target_node",
00:03:51.110 "iscsi_get_target_nodes",
00:03:51.110 "iscsi_delete_initiator_group",
00:03:51.110 "iscsi_initiator_group_remove_initiators",
00:03:51.111 "iscsi_initiator_group_add_initiators",
00:03:51.111 "iscsi_create_initiator_group",
00:03:51.111 "iscsi_get_initiator_groups",
00:03:51.111 "nvmf_set_crdt",
00:03:51.111 "nvmf_set_config",
00:03:51.111 "nvmf_set_max_subsystems",
00:03:51.111 "nvmf_stop_mdns_prr",
00:03:51.111 "nvmf_publish_mdns_prr",
00:03:51.111 "nvmf_subsystem_get_listeners",
00:03:51.111 "nvmf_subsystem_get_qpairs",
00:03:51.111 "nvmf_subsystem_get_controllers",
00:03:51.111 "nvmf_get_stats",
00:03:51.111 "nvmf_get_transports",
00:03:51.111 "nvmf_create_transport",
00:03:51.111 "nvmf_get_targets",
00:03:51.111 "nvmf_delete_target",
00:03:51.111 "nvmf_create_target",
00:03:51.111 "nvmf_subsystem_allow_any_host",
00:03:51.111 "nvmf_subsystem_set_keys",
00:03:51.111 "nvmf_subsystem_remove_host",
00:03:51.111 "nvmf_subsystem_add_host",
00:03:51.111 "nvmf_ns_remove_host",
00:03:51.111 "nvmf_ns_add_host",
00:03:51.111 "nvmf_subsystem_remove_ns",
00:03:51.111 "nvmf_subsystem_set_ns_ana_group",
00:03:51.111 "nvmf_subsystem_add_ns",
00:03:51.111 "nvmf_subsystem_listener_set_ana_state",
00:03:51.111 "nvmf_discovery_get_referrals",
00:03:51.111 "nvmf_discovery_remove_referral",
00:03:51.111 "nvmf_discovery_add_referral",
00:03:51.111 "nvmf_subsystem_remove_listener",
00:03:51.111 "nvmf_subsystem_add_listener",
00:03:51.111 "nvmf_delete_subsystem",
00:03:51.111 "nvmf_create_subsystem",
00:03:51.111 "nvmf_get_subsystems",
00:03:51.111 "env_dpdk_get_mem_stats",
00:03:51.111 "nbd_get_disks",
00:03:51.111 "nbd_stop_disk",
00:03:51.111 "nbd_start_disk",
00:03:51.111 "ublk_recover_disk",
00:03:51.111 "ublk_get_disks",
00:03:51.111 "ublk_stop_disk",
00:03:51.111 "ublk_start_disk",
00:03:51.111 "ublk_destroy_target",
00:03:51.111 "ublk_create_target",
00:03:51.111 "virtio_blk_create_transport",
00:03:51.111 "virtio_blk_get_transports",
00:03:51.111 "vhost_controller_set_coalescing",
00:03:51.111 "vhost_get_controllers",
00:03:51.111 "vhost_delete_controller",
00:03:51.111 "vhost_create_blk_controller",
00:03:51.111 "vhost_scsi_controller_remove_target",
00:03:51.111 "vhost_scsi_controller_add_target",
00:03:51.111 "vhost_start_scsi_controller",
00:03:51.111 "vhost_create_scsi_controller",
00:03:51.111 "thread_set_cpumask",
00:03:51.111 "scheduler_set_options",
00:03:51.111 "framework_get_governor",
00:03:51.111 "framework_get_scheduler",
00:03:51.111 "framework_set_scheduler",
00:03:51.111 "framework_get_reactors",
00:03:51.111 "thread_get_io_channels",
00:03:51.111 "thread_get_pollers",
00:03:51.111 "thread_get_stats",
00:03:51.111 "framework_monitor_context_switch",
00:03:51.111 "spdk_kill_instance",
00:03:51.111 "log_enable_timestamps",
00:03:51.111 "log_get_flags",
00:03:51.111 "log_clear_flag",
00:03:51.111 "log_set_flag",
00:03:51.111 "log_get_level",
00:03:51.111 "log_set_level",
00:03:51.111 "log_get_print_level",
00:03:51.111 "log_set_print_level",
00:03:51.111 "framework_enable_cpumask_locks",
00:03:51.111 "framework_disable_cpumask_locks",
00:03:51.111 "framework_wait_init",
00:03:51.111 "framework_start_init",
00:03:51.111 "scsi_get_devices",
00:03:51.111 "bdev_get_histogram",
00:03:51.111 "bdev_enable_histogram",
00:03:51.111 "bdev_set_qos_limit",
00:03:51.111 "bdev_set_qd_sampling_period",
00:03:51.111 "bdev_get_bdevs",
00:03:51.111 "bdev_reset_iostat",
00:03:51.111 "bdev_get_iostat",
00:03:51.111 "bdev_examine",
00:03:51.111 "bdev_wait_for_examine",
00:03:51.111 "bdev_set_options",
00:03:51.111 "accel_get_stats",
00:03:51.111 "accel_set_options",
00:03:51.111 "accel_set_driver",
00:03:51.111 "accel_crypto_key_destroy",
00:03:51.111 "accel_crypto_keys_get",
00:03:51.111 "accel_crypto_key_create",
00:03:51.111 "accel_assign_opc",
00:03:51.111 "accel_get_module_info",
00:03:51.111 "accel_get_opc_assignments",
00:03:51.111 "vmd_rescan",
00:03:51.111 "vmd_remove_device",
00:03:51.111 "vmd_enable",
00:03:51.111 "sock_get_default_impl",
00:03:51.111 "sock_set_default_impl",
00:03:51.111 "sock_impl_set_options",
00:03:51.111 "sock_impl_get_options",
00:03:51.111 "iobuf_get_stats",
00:03:51.111 "iobuf_set_options",
00:03:51.111 "keyring_get_keys",
00:03:51.111 "vfu_tgt_set_base_path",
00:03:51.111 "framework_get_pci_devices",
00:03:51.111 "framework_get_config",
00:03:51.111 "framework_get_subsystems",
00:03:51.111 "fsdev_set_opts",
00:03:51.111 "fsdev_get_opts",
00:03:51.111 "trace_get_info",
00:03:51.111 "trace_get_tpoint_group_mask",
00:03:51.111 "trace_disable_tpoint_group",
00:03:51.111 "trace_enable_tpoint_group",
00:03:51.111 "trace_clear_tpoint_mask",
00:03:51.111 "trace_set_tpoint_mask",
00:03:51.111 "notify_get_notifications",
00:03:51.111 "notify_get_types",
00:03:51.111 "spdk_get_version",
00:03:51.111 "rpc_get_methods"
00:03:51.111 ]
00:03:51.111 18:19:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:03:51.111 18:19:44 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:51.111 18:19:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:03:51.111 18:19:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:03:51.111 18:19:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 987338
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 987338 ']'
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 987338
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 987338
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 987338'
00:03:51.111 killing process with pid 987338
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 987338
00:03:51.111 18:19:45 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 987338
00:03:51.373
00:03:51.373 real 0m1.563s
00:03:51.373 user 0m2.827s
00:03:51.373 sys 0m0.460s
00:03:51.373 18:19:45 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:51.373 18:19:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:03:51.373 ************************************
00:03:51.373 END TEST spdkcli_tcp
00:03:51.373 ************************************
00:03:51.373 18:19:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:03:51.373 18:19:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:51.373 18:19:45 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:51.373 18:19:45 -- common/autotest_common.sh@10 -- # set +x
00:03:51.373 ************************************
00:03:51.373 START TEST dpdk_mem_utility
00:03:51.373 ************************************
00:03:51.373 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:03:51.634 * Looking for test storage...
00:03:51.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.634 18:19:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:51.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.634 --rc genhtml_branch_coverage=1 00:03:51.634 --rc genhtml_function_coverage=1 00:03:51.634 --rc genhtml_legend=1 00:03:51.634 --rc geninfo_all_blocks=1 00:03:51.634 --rc geninfo_unexecuted_blocks=1 00:03:51.634 00:03:51.634 ' 00:03:51.634 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:51.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.634 --rc 
genhtml_branch_coverage=1 00:03:51.634 --rc genhtml_function_coverage=1 00:03:51.634 --rc genhtml_legend=1 00:03:51.634 --rc geninfo_all_blocks=1 00:03:51.635 --rc geninfo_unexecuted_blocks=1 00:03:51.635 00:03:51.635 ' 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:51.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.635 --rc genhtml_branch_coverage=1 00:03:51.635 --rc genhtml_function_coverage=1 00:03:51.635 --rc genhtml_legend=1 00:03:51.635 --rc geninfo_all_blocks=1 00:03:51.635 --rc geninfo_unexecuted_blocks=1 00:03:51.635 00:03:51.635 ' 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:51.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.635 --rc genhtml_branch_coverage=1 00:03:51.635 --rc genhtml_function_coverage=1 00:03:51.635 --rc genhtml_legend=1 00:03:51.635 --rc geninfo_all_blocks=1 00:03:51.635 --rc geninfo_unexecuted_blocks=1 00:03:51.635 00:03:51.635 ' 00:03:51.635 18:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:51.635 18:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=987706 00:03:51.635 18:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 987706 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 987706 ']' 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:51.635 18:19:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:51.635 18:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.635 [2024-10-08 18:19:45.626739] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
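Editor's note: for orientation, the dpdk_mem_utility run traced here reduces to a short RPC sequence. A sketch only, using the script paths named in the trace and run from the spdk checkout root:

```bash
./build/bin/spdk_tgt &                    # start the target; the test waits for /var/tmp/spdk.sock
./scripts/rpc.py env_dpdk_get_mem_stats   # ask DPDK to dump stats; prints the dump location,
                                          #   {"filename": "/tmp/spdk_mem_dump.txt"}
./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as shown below
```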
00:03:51.635 [2024-10-08 18:19:45.626806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987706 ] 00:03:51.895 [2024-10-08 18:19:45.709195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.895 [2024-10-08 18:19:45.777300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.466 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:52.466 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:03:52.466 18:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:52.466 18:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:52.466 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.466 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:52.466 { 00:03:52.466 "filename": "/tmp/spdk_mem_dump.txt" 00:03:52.466 } 00:03:52.466 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.466 18:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:52.466 DPDK memory size 860.000000 MiB in 1 heap(s) 00:03:52.466 1 heaps totaling size 860.000000 MiB 00:03:52.466 size: 860.000000 MiB heap id: 0 00:03:52.466 end heaps---------- 00:03:52.466 9 mempools totaling size 642.649841 MiB 00:03:52.466 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:52.466 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:52.466 size: 92.545471 MiB name: bdev_io_987706 00:03:52.466 size: 51.011292 MiB name: evtpool_987706 00:03:52.466 size: 50.003479 MiB name: msgpool_987706 00:03:52.466 size: 36.509338 MiB name: fsdev_io_987706 00:03:52.466 size: 21.763794 MiB name: PDU_Pool 00:03:52.466 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:52.466 size: 0.026123 MiB name: Session_Pool 00:03:52.466 end mempools------- 00:03:52.466 6 memzones totaling size 4.142822 MiB 00:03:52.466 size: 1.000366 MiB name: RG_ring_0_987706 00:03:52.466 size: 1.000366 MiB name: RG_ring_1_987706 00:03:52.466 size: 1.000366 MiB name: RG_ring_4_987706 00:03:52.466 size: 1.000366 MiB name: RG_ring_5_987706 00:03:52.466 size: 0.125366 MiB name: RG_ring_2_987706 00:03:52.466 size: 0.015991 MiB name: RG_ring_3_987706 00:03:52.466 end memzones------- 00:03:52.466 18:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:52.728 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:03:52.728 list of free elements. 
size: 13.984680 MiB 00:03:52.728 element at address: 0x200000400000 with size: 1.999512 MiB 00:03:52.728 element at address: 0x200000800000 with size: 1.996948 MiB 00:03:52.728 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:03:52.728 element at address: 0x20001be00000 with size: 0.999878 MiB 00:03:52.728 element at address: 0x200034a00000 with size: 0.994446 MiB 00:03:52.728 element at address: 0x200009600000 with size: 0.959839 MiB 00:03:52.728 element at address: 0x200015e00000 with size: 0.954285 MiB 00:03:52.728 element at address: 0x20001c000000 with size: 0.936584 MiB 00:03:52.728 element at address: 0x200000200000 with size: 0.841614 MiB 00:03:52.728 element at address: 0x20001d800000 with size: 0.582886 MiB 00:03:52.728 element at address: 0x200003e00000 with size: 0.495422 MiB 00:03:52.728 element at address: 0x20000d800000 with size: 0.490723 MiB 00:03:52.728 element at address: 0x20001c200000 with size: 0.485657 MiB 00:03:52.728 element at address: 0x200007000000 with size: 0.481934 MiB 00:03:52.728 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:03:52.728 element at address: 0x200003a00000 with size: 0.355042 MiB 00:03:52.728 list of standard malloc elements. size: 199.218628 MiB 00:03:52.728 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:03:52.728 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:03:52.728 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:03:52.728 element at address: 0x20001befff80 with size: 1.000122 MiB 00:03:52.728 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:03:52.728 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:52.728 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:03:52.728 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:52.728 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:03:52.728 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003aff940 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003affb40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003eff000 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20000707b600 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:03:52.728 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:03:52.728 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:03:52.728 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20001d895380 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20001d895440 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:03:52.728 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:03:52.728 list of memzone associated elements. size: 646.796692 MiB 00:03:52.728 element at address: 0x20001d895500 with size: 211.416748 MiB 00:03:52.728 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:52.728 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:03:52.728 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:52.728 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:03:52.728 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_987706_0 00:03:52.728 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:03:52.728 associated memzone info: size: 48.002930 MiB name: MP_evtpool_987706_0 00:03:52.728 element at address: 0x200003fff380 with size: 48.003052 MiB 00:03:52.728 associated memzone info: size: 48.002930 MiB name: MP_msgpool_987706_0 00:03:52.728 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:03:52.728 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_987706_0 00:03:52.728 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:03:52.728 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:52.728 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:03:52.728 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:52.728 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:03:52.728 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_987706 00:03:52.728 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:03:52.728 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_987706 00:03:52.728 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:52.728 associated memzone info: size: 1.007996 MiB name: MP_evtpool_987706 00:03:52.728 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:03:52.728 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:52.728 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:03:52.728 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:52.728 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:03:52.728 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:52.728 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:03:52.728 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:52.728 element at address: 0x200003eff180 with size: 1.000488 MiB 00:03:52.728 associated memzone info: size: 1.000366 MiB name: RG_ring_0_987706 00:03:52.728 element at address: 0x200003affc00 with size: 1.000488 MiB 00:03:52.728 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_987706 00:03:52.728 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:03:52.728 associated memzone info: size: 1.000366 MiB name: RG_ring_4_987706 00:03:52.728 element at address: 0x200034afe940 with size: 1.000488 MiB 00:03:52.728 associated memzone info: size: 1.000366 MiB name: RG_ring_5_987706 00:03:52.728 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:03:52.728 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_987706 00:03:52.728 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:03:52.728 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_987706 00:03:52.728 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:03:52.729 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:52.729 element at address: 0x20000707b780 with size: 0.500488 MiB 00:03:52.729 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:52.729 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:03:52.729 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:52.729 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:03:52.729 associated memzone info: size: 0.125366 MiB name: RG_ring_2_987706 00:03:52.729 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:03:52.729 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:52.729 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:03:52.729 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:52.729 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:03:52.729 associated memzone info: size: 0.015991 MiB name: RG_ring_3_987706 00:03:52.729 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:03:52.729 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:52.729 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:03:52.729 associated memzone info: size: 0.000183 MiB name: MP_msgpool_987706 00:03:52.729 element at address: 0x200003affa00 with size: 0.000305 MiB 00:03:52.729 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_987706 00:03:52.729 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:03:52.729 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_987706 00:03:52.729 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:03:52.729 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:52.729 18:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:52.729 18:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 987706 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 987706 ']' 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 987706 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 987706 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 987706' 00:03:52.729 killing 
process with pid 987706 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 987706 00:03:52.729 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 987706 00:03:52.990 00:03:52.990 real 0m1.443s 00:03:52.990 user 0m1.535s 00:03:52.990 sys 0m0.425s 00:03:52.990 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.990 18:19:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:52.990 ************************************ 00:03:52.990 END TEST dpdk_mem_utility 00:03:52.990 ************************************ 00:03:52.990 18:19:46 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:52.991 18:19:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.991 18:19:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.991 18:19:46 -- common/autotest_common.sh@10 -- # set +x 00:03:52.991 ************************************ 00:03:52.991 START TEST event 00:03:52.991 ************************************ 00:03:52.991 18:19:46 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:52.991 * Looking for test storage... 00:03:52.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:52.991 18:19:46 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:52.991 18:19:46 event -- common/autotest_common.sh@1681 -- # lcov --version 00:03:52.991 18:19:46 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:53.252 18:19:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.252 18:19:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.252 18:19:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.252 18:19:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.252 18:19:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.252 18:19:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.252 18:19:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.252 18:19:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.252 18:19:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.252 18:19:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.252 18:19:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.252 18:19:47 event -- scripts/common.sh@344 -- # case "$op" in 00:03:53.252 18:19:47 event -- scripts/common.sh@345 -- # : 1 00:03:53.252 18:19:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.252 18:19:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.252 18:19:47 event -- scripts/common.sh@365 -- # decimal 1 00:03:53.252 18:19:47 event -- scripts/common.sh@353 -- # local d=1 00:03:53.252 18:19:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.252 18:19:47 event -- scripts/common.sh@355 -- # echo 1 00:03:53.252 18:19:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.252 18:19:47 event -- scripts/common.sh@366 -- # decimal 2 00:03:53.252 18:19:47 event -- scripts/common.sh@353 -- # local d=2 00:03:53.252 18:19:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.252 18:19:47 event -- scripts/common.sh@355 -- # echo 2 00:03:53.252 18:19:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.252 18:19:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.252 18:19:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.252 18:19:47 event -- scripts/common.sh@368 -- # return 0 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.252 --rc genhtml_branch_coverage=1 00:03:53.252 --rc genhtml_function_coverage=1 00:03:53.252 --rc genhtml_legend=1 00:03:53.252 --rc geninfo_all_blocks=1 00:03:53.252 --rc geninfo_unexecuted_blocks=1 00:03:53.252 00:03:53.252 ' 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.252 --rc genhtml_branch_coverage=1 00:03:53.252 --rc genhtml_function_coverage=1 00:03:53.252 --rc genhtml_legend=1 00:03:53.252 --rc geninfo_all_blocks=1 00:03:53.252 --rc geninfo_unexecuted_blocks=1 00:03:53.252 00:03:53.252 ' 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.252 --rc genhtml_branch_coverage=1 00:03:53.252 --rc genhtml_function_coverage=1 00:03:53.252 --rc genhtml_legend=1 00:03:53.252 --rc geninfo_all_blocks=1 00:03:53.252 --rc geninfo_unexecuted_blocks=1 00:03:53.252 00:03:53.252 ' 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.252 --rc genhtml_branch_coverage=1 00:03:53.252 --rc genhtml_function_coverage=1 00:03:53.252 --rc genhtml_legend=1 00:03:53.252 --rc geninfo_all_blocks=1 00:03:53.252 --rc geninfo_unexecuted_blocks=1 00:03:53.252 00:03:53.252 ' 00:03:53.252 18:19:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:53.252 18:19:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:53.252 18:19:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:03:53.252 18:19:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.252 18:19:47 event -- common/autotest_common.sh@10 -- # set +x 00:03:53.252 ************************************ 00:03:53.252 START TEST event_perf 00:03:53.252 ************************************ 00:03:53.252 18:19:47 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:03:53.252 Running I/O for 1 seconds...[2024-10-08 18:19:47.156138] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:03:53.252 [2024-10-08 18:19:47.156241] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988039 ] 00:03:53.252 [2024-10-08 18:19:47.240163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:53.512 [2024-10-08 18:19:47.312997] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:03:53.512 [2024-10-08 18:19:47.313099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:03:53.512 [2024-10-08 18:19:47.313389] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:03:53.512 [2024-10-08 18:19:47.313390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.456 Running I/O for 1 seconds... 00:03:54.456 lcore 0: 178996 00:03:54.456 lcore 1: 178998 00:03:54.456 lcore 2: 178996 00:03:54.456 lcore 3: 178996 00:03:54.456 done. 00:03:54.456 00:03:54.456 real 0m1.223s 00:03:54.456 user 0m4.121s 00:03:54.456 sys 0m0.098s 00:03:54.456 18:19:48 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.456 18:19:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:54.456 ************************************ 00:03:54.456 END TEST event_perf 00:03:54.456 ************************************ 00:03:54.456 18:19:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:54.456 18:19:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:03:54.456 18:19:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.456 18:19:48 event -- common/autotest_common.sh@10 -- # set +x 00:03:54.456 ************************************ 00:03:54.456 START TEST event_reactor 00:03:54.456 ************************************ 00:03:54.456 18:19:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:54.456 [2024-10-08 18:19:48.454013] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
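Editor's note on the event_perf numbers just above: with `-m 0xF -t 1` the four reactors each reported roughly 179k processed events in the one-second window (the `lcore N:` lines). Only those two flags are exercised; a hypothetical variant run would look like:

```bash
# two reactors, five-second window (flag meanings taken from the run above:
# -m is the core mask, -t the run time in seconds)
./test/event/event_perf/event_perf -m 0x3 -t 5
```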
00:03:54.456 [2024-10-08 18:19:48.454106] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988358 ] 00:03:54.718 [2024-10-08 18:19:48.536653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.718 [2024-10-08 18:19:48.591735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.659 test_start 00:03:55.659 oneshot 00:03:55.659 tick 100 00:03:55.659 tick 100 00:03:55.659 tick 250 00:03:55.659 tick 100 00:03:55.659 tick 100 00:03:55.659 tick 250 00:03:55.659 tick 100 00:03:55.659 tick 500 00:03:55.659 tick 100 00:03:55.659 tick 100 00:03:55.659 tick 250 00:03:55.659 tick 100 00:03:55.659 tick 100 00:03:55.659 test_end 00:03:55.659 00:03:55.659 real 0m1.204s 00:03:55.659 user 0m1.118s 00:03:55.659 sys 0m0.082s 00:03:55.659 18:19:49 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.659 18:19:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:55.659 ************************************ 00:03:55.659 END TEST event_reactor 00:03:55.659 ************************************ 00:03:55.659 18:19:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:55.659 18:19:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:03:55.659 18:19:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.659 18:19:49 event -- common/autotest_common.sh@10 -- # set +x 00:03:55.659 ************************************ 00:03:55.659 START TEST event_reactor_perf 00:03:55.659 ************************************ 00:03:55.659 18:19:49 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:55.919 [2024-10-08 18:19:49.734888] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
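Editor's note: the event_reactor trace above prints `test_start`, a series of `tick` lines and `test_end`; the tick values (100, 250, 500) appear to correspond to the periods of the timed pollers the test registers, though the trace itself never labels the units. Only the `-t` run-time flag is exercised; a longer hypothetical run:

```bash
# same binary, five-second window instead of one (-t seconds, as used above)
./test/event/reactor/reactor -t 5
```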
00:03:55.919 [2024-10-08 18:19:49.735002] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988706 ] 00:03:55.919 [2024-10-08 18:19:49.813863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.919 [2024-10-08 18:19:49.872867] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.859 test_start 00:03:56.859 test_end 00:03:56.859 Performance: 538505 events per second 00:03:56.859 00:03:56.859 real 0m1.204s 00:03:56.859 user 0m1.122s 00:03:56.859 sys 0m0.078s 00:03:56.859 18:19:50 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.119 18:19:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:57.119 ************************************ 00:03:57.119 END TEST event_reactor_perf 00:03:57.119 ************************************ 00:03:57.119 18:19:50 event -- event/event.sh@49 -- # uname -s 00:03:57.119 18:19:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:57.119 18:19:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:57.119 18:19:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.119 18:19:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.120 18:19:50 event -- common/autotest_common.sh@10 -- # set +x 00:03:57.120 ************************************ 00:03:57.120 START TEST event_scheduler 00:03:57.120 ************************************ 00:03:57.120 18:19:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:57.120 * Looking for test storage... 
00:03:57.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:57.120 18:19:51 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:57.120 18:19:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:03:57.120 18:19:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:57.380 18:19:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.380 18:19:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:57.381 18:19:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.381 18:19:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.381 18:19:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.381 18:19:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.381 --rc genhtml_branch_coverage=1 00:03:57.381 --rc genhtml_function_coverage=1 00:03:57.381 --rc genhtml_legend=1 00:03:57.381 --rc geninfo_all_blocks=1 00:03:57.381 --rc geninfo_unexecuted_blocks=1 00:03:57.381 00:03:57.381 ' 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.381 --rc genhtml_branch_coverage=1 00:03:57.381 --rc genhtml_function_coverage=1 00:03:57.381 --rc genhtml_legend=1 00:03:57.381 --rc geninfo_all_blocks=1 00:03:57.381 --rc geninfo_unexecuted_blocks=1 00:03:57.381 00:03:57.381 ' 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.381 --rc genhtml_branch_coverage=1 00:03:57.381 --rc genhtml_function_coverage=1 00:03:57.381 --rc genhtml_legend=1 00:03:57.381 --rc geninfo_all_blocks=1 00:03:57.381 --rc geninfo_unexecuted_blocks=1 00:03:57.381 00:03:57.381 ' 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.381 --rc genhtml_branch_coverage=1 00:03:57.381 --rc genhtml_function_coverage=1 00:03:57.381 --rc genhtml_legend=1 00:03:57.381 --rc geninfo_all_blocks=1 00:03:57.381 --rc geninfo_unexecuted_blocks=1 00:03:57.381 00:03:57.381 ' 00:03:57.381 18:19:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:57.381 18:19:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=989099 00:03:57.381 18:19:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.381 18:19:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 989099 00:03:57.381 18:19:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 989099 ']' 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:57.381 18:19:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:57.381 [2024-10-08 18:19:51.253743] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:03:57.381 [2024-10-08 18:19:51.253808] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989099 ] 00:03:57.381 [2024-10-08 18:19:51.340664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:57.381 [2024-10-08 18:19:51.434862] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.381 [2024-10-08 18:19:51.435066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.381 [2024-10-08 18:19:51.435127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:03:57.381 [2024-10-08 18:19:51.435129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:03:58.321 18:19:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 [2024-10-08 18:19:52.073655] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:58.321 [2024-10-08 18:19:52.073673] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:58.321 [2024-10-08 18:19:52.073683] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:58.321 [2024-10-08 18:19:52.073689] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:58.321 [2024-10-08 18:19:52.073695] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 [2024-10-08 18:19:52.140019] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
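Editor's note: the scheduler startup just traced hinges on `--wait-for-rpc`: the app pauses before subsystem init so the test can pick a scheduler first, then releases it. Condensed from the trace (`rpc_cmd` in the log is a wrapper around scripts/rpc.py):

```bash
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
./scripts/rpc.py framework_set_scheduler dynamic   # logs NOTICEs for the load/core/busy
                                                   # limits; the dpdk governor may fail to
                                                   # init on SMT systems, as seen above
./scripts/rpc.py framework_start_init              # release init; the scheduler test starts
```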
00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 ************************************ 00:03:58.321 START TEST scheduler_create_thread 00:03:58.321 ************************************ 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 2 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 3 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 4 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 5 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 6 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 7 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.321 8 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.321 18:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:59.262 9 00:03:59.262 18:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.262 18:19:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:59.262 18:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.262 18:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.644 10 00:04:00.644 18:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.644 18:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:00.644 18:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.644 18:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.214 18:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.214 18:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:01.214 18:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:01.214 18:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.214 18:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.784 18:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.784 18:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:01.784 18:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.784 18:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.355 18:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.355 18:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:02.355 18:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:02.355 18:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.355 18:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.924 18:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.924 00:04:02.924 real 0m4.567s 00:04:02.924 user 0m0.025s 00:04:02.924 sys 0m0.007s 00:04:02.924 18:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.924 18:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.924 ************************************ 00:04:02.924 END TEST scheduler_create_thread 00:04:02.924 ************************************ 00:04:02.924 18:19:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:02.924 18:19:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 989099 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 989099 ']' 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 989099 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 989099 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:02.924 18:19:56 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 989099' 00:04:02.925 killing process with pid 989099 00:04:02.925 18:19:56 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 989099 00:04:02.925 18:19:56 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 989099 00:04:02.925 [2024-10-08 18:19:56.974879] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
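Editor's note: the scheduler_create_thread test that just finished is driven entirely through the scheduler plugin RPCs; a condensed sketch, with the names, masks and `-a` values copied from the trace (`-a` looks like the thread's active percentage):

```bash
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id, activity
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
```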
00:04:03.185 00:04:03.185 real 0m6.177s 00:04:03.185 user 0m14.647s 00:04:03.185 sys 0m0.438s 00:04:03.185 18:19:57 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.185 18:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.185 ************************************ 00:04:03.185 END TEST event_scheduler 00:04:03.185 ************************************ 00:04:03.185 18:19:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:03.185 18:19:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:03.185 18:19:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.185 18:19:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.185 18:19:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.447 ************************************ 00:04:03.447 START TEST app_repeat 00:04:03.447 ************************************ 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=990191 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 990191' 00:04:03.447 Process app_repeat pid: 990191 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:03.447 spdk_app_start Round 0 00:04:03.447 18:19:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 990191 /var/tmp/spdk-nbd.sock 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 990191 ']' 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:03.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:03.447 18:19:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:03.447 [2024-10-08 18:19:57.300457] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
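Editor's note: the app_repeat round that follows exercises a malloc bdev through the kernel nbd driver. Reduced to its RPC flow (socket path and sizes are the ones in the trace below; `nbd_stop_disk` from the method list earlier would be the matching teardown):

```bash
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # 64 MB bdev, 4 KiB blocks -> Malloc0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # expose it as /dev/nbd0
dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct                      # prove the device answers reads
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
```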
00:04:03.447 [2024-10-08 18:19:57.300551] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990191 ] 00:04:03.447 [2024-10-08 18:19:57.379253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.447 [2024-10-08 18:19:57.438794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.447 [2024-10-08 18:19:57.438795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.388 18:19:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:04.388 18:19:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:04.388 18:19:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:04.388 Malloc0 00:04:04.388 18:19:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:04.678 Malloc1 00:04:04.678 18:19:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:04.678 /dev/nbd0 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:04.678 18:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:04.678 18:19:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:04.939 1+0 records in 00:04:04.939 1+0 records out 00:04:04.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286405 s, 14.3 MB/s 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:04.939 /dev/nbd1 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:04.939 1+0 records in 00:04:04.939 1+0 records out 00:04:04.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269129 s, 15.2 MB/s 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:04.939 18:19:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.939 
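Each nbd_start_disk call above is followed by the waitfornbd check traced at autotest_common.sh@868-889: poll /proc/partitions until the kernel has registered the device, then prove it actually services I/O with a single 4 KiB O_DIRECT read, checking the byte count that landed in the scratch file. A condensed reconstruction (the real helper also wraps the dd in its own 20-iteration retry loop, and keeps the scratch file under spdk/test/event/):

  waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # One direct-I/O read proves the block device can reach the bdev behind it.
    dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
  }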
18:19:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.939 18:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:05.200 { 00:04:05.200 "nbd_device": "/dev/nbd0", 00:04:05.200 "bdev_name": "Malloc0" 00:04:05.200 }, 00:04:05.200 { 00:04:05.200 "nbd_device": "/dev/nbd1", 00:04:05.200 "bdev_name": "Malloc1" 00:04:05.200 } 00:04:05.200 ]' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:05.200 { 00:04:05.200 "nbd_device": "/dev/nbd0", 00:04:05.200 "bdev_name": "Malloc0" 00:04:05.200 }, 00:04:05.200 { 00:04:05.200 "nbd_device": "/dev/nbd1", 00:04:05.200 "bdev_name": "Malloc1" 00:04:05.200 } 00:04:05.200 ]' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:05.200 /dev/nbd1' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:05.200 /dev/nbd1' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:05.200 256+0 records in 00:04:05.200 256+0 records out 00:04:05.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121439 s, 86.3 MB/s 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:05.200 256+0 records in 00:04:05.200 256+0 records out 00:04:05.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113782 s, 92.2 MB/s 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:05.200 256+0 records in 00:04:05.200 256+0 records out 00:04:05.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012612 s, 83.1 MB/s 00:04:05.200 18:19:59 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:05.200 18:19:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:05.461 18:19:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.722 18:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:05.982 18:19:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:05.982 18:19:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:06.243 18:20:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:06.243 [2024-10-08 18:20:00.182396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:06.243 [2024-10-08 18:20:00.234698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.243 [2024-10-08 18:20:00.234698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.243 [2024-10-08 18:20:00.263871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:06.243 [2024-10-08 18:20:00.263902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:09.544 18:20:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:09.544 18:20:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:09.544 spdk_app_start Round 1 00:04:09.544 18:20:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 990191 /var/tmp/spdk-nbd.sock 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 990191 ']' 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:09.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
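With both devices up, nbd_common.sh@70-85 runs the write/verify pass visible in the dd and cmp records above: seed a 1 MiB random pattern file, push it through every nbd device with direct I/O, then byte-compare each device against the pattern. A condensed sketch, with the pattern file relocated to /tmp for brevity:

  tmp_file=/tmp/nbdrandtest        # trace keeps this under spdk/test/event/
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
  done
  for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"  # exits non-zero on the first mismatching byte
  done
  rm "$tmp_file"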
00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:09.544 18:20:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:09.544 18:20:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:09.544 Malloc0 00:04:09.544 18:20:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:09.804 Malloc1 00:04:09.804 18:20:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:09.804 /dev/nbd0 00:04:09.804 18:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:10.064 18:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:10.064 1+0 records in 00:04:10.064 1+0 records out 00:04:10.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273054 s, 15.0 MB/s 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:10.064 18:20:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:10.064 18:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:10.064 18:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.064 18:20:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:10.064 /dev/nbd1 00:04:10.064 18:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:10.064 18:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:10.064 18:20:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:10.065 1+0 records in 00:04:10.065 1+0 records out 00:04:10.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275983 s, 14.8 MB/s 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:10.065 18:20:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:10.065 18:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:10.065 18:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:10.326 { 00:04:10.326 "nbd_device": "/dev/nbd0", 00:04:10.326 "bdev_name": "Malloc0" 00:04:10.326 }, 00:04:10.326 { 00:04:10.326 "nbd_device": "/dev/nbd1", 00:04:10.326 "bdev_name": "Malloc1" 00:04:10.326 } 00:04:10.326 ]' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:10.326 { 00:04:10.326 "nbd_device": "/dev/nbd0", 00:04:10.326 "bdev_name": "Malloc0" 00:04:10.326 }, 00:04:10.326 { 00:04:10.326 "nbd_device": "/dev/nbd1", 00:04:10.326 "bdev_name": "Malloc1" 00:04:10.326 } 00:04:10.326 ]' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:10.326 /dev/nbd1' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:10.326 /dev/nbd1' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:10.326 256+0 records in 00:04:10.326 256+0 records out 00:04:10.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127187 s, 82.4 MB/s 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:10.326 18:20:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:10.587 256+0 records in 00:04:10.587 256+0 records out 00:04:10.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126218 s, 83.1 MB/s 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:10.587 256+0 records in 00:04:10.587 256+0 records out 00:04:10.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136446 s, 76.8 MB/s 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:10.587 18:20:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:10.588 18:20:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:10.848 18:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.849 18:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:11.108 18:20:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:11.108 18:20:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:11.368 18:20:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:11.368 [2024-10-08 18:20:05.350686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.368 [2024-10-08 18:20:05.403632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.368 [2024-10-08 18:20:05.403632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.628 [2024-10-08 18:20:05.433366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:11.628 [2024-10-08 18:20:05.433398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:14.928 18:20:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:14.928 18:20:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:14.928 spdk_app_start Round 2 00:04:14.928 18:20:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 990191 /var/tmp/spdk-nbd.sock 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 990191 ']' 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:14.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
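After the devices are stopped, nbd_common.sh@61-66 re-counts what the target still exports: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts the /dev/nbd entries (with a true fallback, since grep -c exits non-zero on a zero count). The round only passes once the count is back to 0:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nbd_disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)   # '[]' once both disks are stopped
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]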
00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.928 18:20:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:14.928 18:20:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.928 Malloc0 00:04:14.928 18:20:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.928 Malloc1 00:04:14.928 18:20:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.928 18:20:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:14.929 18:20:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.929 18:20:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:14.929 18:20:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:14.929 18:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:14.929 18:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.929 18:20:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:15.189 /dev/nbd0 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:15.189 1+0 records in 00:04:15.189 1+0 records out 00:04:15.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020938 s, 19.6 MB/s 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.189 /dev/nbd1 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.189 18:20:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:15.189 18:20:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.189 1+0 records in 00:04:15.189 1+0 records out 00:04:15.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030536 s, 13.4 MB/s 00:04:15.450 18:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.450 18:20:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:15.450 18:20:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.451 18:20:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:15.451 18:20:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:15.451 { 00:04:15.451 "nbd_device": "/dev/nbd0", 00:04:15.451 "bdev_name": "Malloc0" 00:04:15.451 }, 00:04:15.451 { 00:04:15.451 "nbd_device": "/dev/nbd1", 00:04:15.451 "bdev_name": "Malloc1" 00:04:15.451 } 00:04:15.451 ]' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.451 { 00:04:15.451 "nbd_device": "/dev/nbd0", 00:04:15.451 "bdev_name": "Malloc0" 00:04:15.451 }, 00:04:15.451 { 00:04:15.451 "nbd_device": "/dev/nbd1", 00:04:15.451 "bdev_name": "Malloc1" 00:04:15.451 } 00:04:15.451 ]' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.451 /dev/nbd1' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.451 /dev/nbd1' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.451 18:20:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:15.713 256+0 records in 00:04:15.713 256+0 records out 00:04:15.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121207 s, 86.5 MB/s 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.713 256+0 records in 00:04:15.713 256+0 records out 00:04:15.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124931 s, 83.9 MB/s 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.713 256+0 records in 00:04:15.713 256+0 records out 00:04:15.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132527 s, 79.1 MB/s 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.713 18:20:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.975 18:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:16.235 18:20:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:16.235 18:20:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.496 18:20:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:16.496 [2024-10-08 18:20:10.493013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:16.496 [2024-10-08 18:20:10.545798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.496 [2024-10-08 18:20:10.545799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.757 [2024-10-08 18:20:10.574959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:16.757 [2024-10-08 18:20:10.574994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:20.058 18:20:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 990191 /var/tmp/spdk-nbd.sock 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 990191 ']' 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:20.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
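Each round ends the same way: event.sh@34 asks the app to exit over RPC with spdk_kill_instance SIGTERM, event.sh@35 sleeps through the restart, and the final round reaps the pid through the killprocess helper traced at autotest_common.sh@950-974 (liveness probe with kill -0, a comm-name check so a stray sudo is never signalled directly, then kill and wait). A condensed sketch of that teardown:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 0    # condensed: the real helper treats sudo-wrapped pids specially
    kill "$pid"
    wait "$pid"
  }
  killprocess "$repeat_pid"    # $repeat_pid from the launch step earlier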
00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:20.058 18:20:13 event.app_repeat -- event/event.sh@39 -- # killprocess 990191 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 990191 ']' 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 990191 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 990191 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 990191' 00:04:20.058 killing process with pid 990191 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 990191 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 990191 00:04:20.058 spdk_app_start is called in Round 0. 00:04:20.058 Shutdown signal received, stop current app iteration 00:04:20.058 Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 reinitialization... 00:04:20.058 spdk_app_start is called in Round 1. 00:04:20.058 Shutdown signal received, stop current app iteration 00:04:20.058 Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 reinitialization... 00:04:20.058 spdk_app_start is called in Round 2. 00:04:20.058 Shutdown signal received, stop current app iteration 00:04:20.058 Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 reinitialization... 00:04:20.058 spdk_app_start is called in Round 3. 
00:04:20.058 Shutdown signal received, stop current app iteration 00:04:20.058 18:20:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:20.058 18:20:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:20.058 00:04:20.058 real 0m16.489s 00:04:20.058 user 0m36.095s 00:04:20.058 sys 0m2.296s 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.058 18:20:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.058 ************************************ 00:04:20.058 END TEST app_repeat 00:04:20.058 ************************************ 00:04:20.058 18:20:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:20.058 18:20:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:20.058 18:20:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.058 18:20:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.058 18:20:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.058 ************************************ 00:04:20.058 START TEST cpu_locks 00:04:20.058 ************************************ 00:04:20.058 18:20:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:20.058 * Looking for test storage... 00:04:20.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.059 18:20:13 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:20.059 18:20:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:04:20.059 18:20:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:20.059 18:20:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:20.059 18:20:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.059 18:20:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.059 18:20:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.059 18:20:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.059 --rc genhtml_branch_coverage=1 00:04:20.059 --rc genhtml_function_coverage=1 00:04:20.059 --rc genhtml_legend=1 00:04:20.059 --rc geninfo_all_blocks=1 00:04:20.059 --rc geninfo_unexecuted_blocks=1 00:04:20.059 00:04:20.059 ' 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.059 --rc genhtml_branch_coverage=1 00:04:20.059 --rc genhtml_function_coverage=1 00:04:20.059 --rc genhtml_legend=1 00:04:20.059 --rc geninfo_all_blocks=1 00:04:20.059 --rc geninfo_unexecuted_blocks=1 00:04:20.059 00:04:20.059 ' 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.059 --rc genhtml_branch_coverage=1 00:04:20.059 --rc genhtml_function_coverage=1 00:04:20.059 --rc genhtml_legend=1 00:04:20.059 --rc geninfo_all_blocks=1 00:04:20.059 --rc geninfo_unexecuted_blocks=1 00:04:20.059 00:04:20.059 ' 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.059 --rc genhtml_branch_coverage=1 00:04:20.059 --rc genhtml_function_coverage=1 00:04:20.059 --rc genhtml_legend=1 00:04:20.059 --rc geninfo_all_blocks=1 00:04:20.059 --rc geninfo_unexecuted_blocks=1 00:04:20.059 00:04:20.059 ' 00:04:20.059 18:20:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:20.059 18:20:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:20.059 18:20:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:20.059 18:20:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.059 18:20:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.059 ************************************ 
00:04:20.059 START TEST default_locks 00:04:20.059 ************************************ 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=993766 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 993766 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 993766 ']' 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.059 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.320 [2024-10-08 18:20:14.129685] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:20.320 [2024-10-08 18:20:14.129756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993766 ] 00:04:20.320 [2024-10-08 18:20:14.211810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.320 [2024-10-08 18:20:14.273786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.891 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.891 18:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:20.891 18:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 993766 00:04:20.891 18:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 993766 00:04:20.891 18:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:21.461 lslocks: write error 00:04:21.461 18:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 993766 00:04:21.461 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 993766 ']' 00:04:21.461 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 993766 00:04:21.461 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:21.461 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.461 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993766 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993766' 
00:04:21.722 killing process with pid 993766 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 993766 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 993766 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 993766 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 993766 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:21.722 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 993766 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 993766 ']' 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
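The recurring "Waiting for process to start up and listen on UNIX domain socket ..." banner is printed by waitforlisten, which polls until the target's RPC socket answers or the process dies. A hedged sketch of such a poll loop; the rpc_get_methods probe and the retry budget are assumptions, not the exact harness code:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            if ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0                            # socket is up and answering RPCs
            fi
            kill -0 "$pid" 2>/dev/null || return 1  # target died while we waited
            sleep 0.1
        done
        return 1
    }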
00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (993766) - No such process 00:04:21.723 ERROR: process (pid: 993766) is no longer running 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:21.723 00:04:21.723 real 0m1.682s 00:04:21.723 user 0m1.787s 00:04:21.723 sys 0m0.607s 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.723 18:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.723 ************************************ 00:04:21.723 END TEST default_locks 00:04:21.723 ************************************ 00:04:21.723 18:20:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:21.723 18:20:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.723 18:20:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.723 18:20:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.984 ************************************ 00:04:21.984 START TEST default_locks_via_rpc 00:04:21.984 ************************************ 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=994136 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 994136 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 994136 ']' 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
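The sequence just above ("kill: (993766) - No such process", return 1, es=1) is the harness's expected-failure path: the NOT wrapper runs a command and succeeds only when that command fails. A minimal sketch, inferred from the es bookkeeping in the trace (the real helper also validates its argument and treats exit codes above 128 specially):

    NOT() {
        local es=0
        "$@" || es=$?       # run the wrapped command, capture its exit status
        (( es != 0 ))       # invert: a failing command makes NOT succeed
    }
    # as exercised above: NOT waitforlisten 993766   # pid was already killed, so this passes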
00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.984 18:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.984 [2024-10-08 18:20:15.869839] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:21.984 [2024-10-08 18:20:15.869896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994136 ] 00:04:21.984 [2024-10-08 18:20:15.949652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.984 [2024-10-08 18:20:16.014035] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.926 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.926 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 994136 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:22.927 18:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 994136 00:04:23.187 18:20:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 994136 00:04:23.187 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 994136 ']' 00:04:23.187 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 994136 00:04:23.187 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:23.187 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.187 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994136 00:04:23.447 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.447 18:20:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.447 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994136' 00:04:23.447 killing process with pid 994136 00:04:23.447 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 994136 00:04:23.447 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 994136 00:04:23.447 00:04:23.447 real 0m1.674s 00:04:23.447 user 0m1.769s 00:04:23.447 sys 0m0.602s 00:04:23.447 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.447 18:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.447 ************************************ 00:04:23.447 END TEST default_locks_via_rpc 00:04:23.447 ************************************ 00:04:23.709 18:20:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:23.709 18:20:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.709 18:20:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.709 18:20:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:23.709 ************************************ 00:04:23.709 START TEST non_locking_app_on_locked_coremask 00:04:23.709 ************************************ 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=994505 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 994505 /var/tmp/spdk.sock 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 994505 ']' 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.709 18:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.709 [2024-10-08 18:20:17.618101] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
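default_locks_via_rpc, which finished above, exercises the same per-core lock files at runtime over JSON-RPC instead of command-line flags. A sketch of the round trip, with the socket path and pid taken from this particular run (both will differ elsewhere):

    # framework_disable/enable_cpumask_locks are the RPCs visible in the trace.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p 994136 | grep spdk_cpu_lock && echo "unexpected: lock still held"
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p 994136 | grep spdk_cpu_lock      # the core 0 lock is back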
00:04:23.709 [2024-10-08 18:20:17.618157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994505 ] 00:04:23.709 [2024-10-08 18:20:17.697844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.709 [2024-10-08 18:20:17.759100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=994829 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 994829 /var/tmp/spdk2.sock 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 994829 ']' 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:24.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.650 18:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 [2024-10-08 18:20:18.454749] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:24.650 [2024-10-08 18:20:18.454803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994829 ] 00:04:24.650 [2024-10-08 18:20:18.529294] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:24.650 [2024-10-08 18:20:18.529315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.650 [2024-10-08 18:20:18.639689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.222 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.222 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:25.222 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 994505 00:04:25.222 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 994505 00:04:25.222 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:25.794 lslocks: write error 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 994505 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 994505 ']' 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 994505 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994505 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994505' 00:04:25.794 killing process with pid 994505 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 994505 00:04:25.794 18:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 994505 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 994829 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 994829 ']' 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 994829 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994829 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994829' 00:04:26.365 killing 
process with pid 994829 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 994829 00:04:26.365 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 994829 00:04:26.365 00:04:26.365 real 0m2.847s 00:04:26.365 user 0m3.150s 00:04:26.365 sys 0m0.875s 00:04:26.366 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.366 18:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.366 ************************************ 00:04:26.366 END TEST non_locking_app_on_locked_coremask 00:04:26.366 ************************************ 00:04:26.627 18:20:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:26.627 18:20:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.627 18:20:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.627 18:20:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:26.627 ************************************ 00:04:26.627 START TEST locking_app_on_unlocked_coremask 00:04:26.627 ************************************ 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=995208 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 995208 /var/tmp/spdk.sock 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 995208 ']' 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.627 18:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.627 [2024-10-08 18:20:20.541070] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:26.627 [2024-10-08 18:20:20.541131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995208 ] 00:04:26.627 [2024-10-08 18:20:20.620073] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
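The non_locking_app_on_locked_coremask pass that just ended reduces to one idea: a locked target owns core 0, yet a second target may still run on that core if it opts out of locking. In outline, using the flags shown in the trace (paths abbreviated):

    ./build/bin/spdk_tgt -m 0x1 &                 # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                  # shares core 0 but takes no lock
    pid2=$!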
00:04:26.627 [2024-10-08 18:20:20.620098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.627 [2024-10-08 18:20:20.678128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=995395 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 995395 /var/tmp/spdk2.sock 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 995395 ']' 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:27.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.569 18:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.569 [2024-10-08 18:20:21.408104] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:04:27.570 [2024-10-08 18:20:21.408160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995395 ] 00:04:27.570 [2024-10-08 18:20:21.479391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.570 [2024-10-08 18:20:21.589437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.141 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.141 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:28.141 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 995395 00:04:28.141 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 995395 00:04:28.141 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:28.713 lslocks: write error 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 995208 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 995208 ']' 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 995208 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995208 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995208' 00:04:28.713 killing process with pid 995208 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 995208 00:04:28.713 18:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 995208 00:04:28.973 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 995395 00:04:28.973 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 995395 ']' 00:04:28.973 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 995395 00:04:28.973 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:28.973 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.234 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995395 00:04:29.234 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.234 18:20:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.234 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995395' 00:04:29.234 killing process with pid 995395 00:04:29.234 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 995395 00:04:29.234 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 995395 00:04:29.234 00:04:29.234 real 0m2.810s 00:04:29.234 user 0m3.148s 00:04:29.234 sys 0m0.852s 00:04:29.494 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.494 18:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.494 ************************************ 00:04:29.494 END TEST locking_app_on_unlocked_coremask 00:04:29.494 ************************************ 00:04:29.495 18:20:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:29.495 18:20:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.495 18:20:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.495 18:20:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.495 ************************************ 00:04:29.495 START TEST locking_app_on_locked_coremask 00:04:29.495 ************************************ 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=995913 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 995913 /var/tmp/spdk.sock 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 995913 ']' 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.495 18:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.495 [2024-10-08 18:20:23.424966] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
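A probe that recurs throughout these tests is locks_exist, and with it the stray "lslocks: write error" lines. A sketch of the probe follows; the write error is most plausibly benign, lslocks complaining about a broken pipe once grep -q has matched and exited early (an inference from the trace, not a confirmed diagnosis):

    locks_exist() {
        local pid=$1
        # a target holding its core claim shows an open /var/tmp/spdk_cpu_lock_* entry
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }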
00:04:29.495 [2024-10-08 18:20:23.425037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995913 ] 00:04:29.495 [2024-10-08 18:20:23.504546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.755 [2024-10-08 18:20:23.563996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=995930 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 995930 /var/tmp/spdk2.sock 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 995930 /var/tmp/spdk2.sock 00:04:30.327 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 995930 /var/tmp/spdk2.sock 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 995930 ']' 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:30.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.328 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.328 [2024-10-08 18:20:24.245811] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:04:30.328 [2024-10-08 18:20:24.245863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995930 ] 00:04:30.328 [2024-10-08 18:20:24.319971] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 995913 has claimed it. 00:04:30.328 [2024-10-08 18:20:24.320006] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:30.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (995930) - No such process 00:04:30.900 ERROR: process (pid: 995930) is no longer running 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 995913 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 995913 00:04:30.900 18:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.471 lslocks: write error 00:04:31.471 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 995913 00:04:31.471 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 995913 ']' 00:04:31.471 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 995913 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995913 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995913' 00:04:31.472 killing process with pid 995913 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 995913 00:04:31.472 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 995913 00:04:31.733 00:04:31.733 real 0m2.334s 00:04:31.733 user 0m2.612s 00:04:31.733 sys 0m0.662s 00:04:31.733 18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.733 
18:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.733 ************************************ 00:04:31.733 END TEST locking_app_on_locked_coremask 00:04:31.733 ************************************ 00:04:31.733 18:20:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:31.733 18:20:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.733 18:20:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.733 18:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.733 ************************************ 00:04:31.733 START TEST locking_overlapped_coremask 00:04:31.733 ************************************ 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=996291 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 996291 /var/tmp/spdk.sock 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 996291 ']' 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.733 18:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.994 [2024-10-08 18:20:25.832296] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
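The "Cannot create lock on core N, probably process X has claimed it" failures above come down to per-core lock files under /var/tmp. As a rough model only (the file name matches the trace, but the locking primitive is an assumption; SPDK may well use fcntl locks rather than flock), the effect is that of an exclusive non-blocking lock per core:

    # Hypothetical model of a core claim, for intuition only.
    exec {fd}> /var/tmp/spdk_cpu_lock_000
    flock -xn "$fd" || echo "Cannot create lock on core 0"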
00:04:31.994 [2024-10-08 18:20:25.832349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996291 ] 00:04:31.994 [2024-10-08 18:20:25.910321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:31.994 [2024-10-08 18:20:25.966771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.994 [2024-10-08 18:20:25.966927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.994 [2024-10-08 18:20:25.966928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.938 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:32.938 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:32.938 18:20:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=996603 00:04:32.938 18:20:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 996603 /var/tmp/spdk2.sock 00:04:32.938 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 996603 /var/tmp/spdk2.sock 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 996603 /var/tmp/spdk2.sock 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 996603 ']' 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.939 18:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.939 [2024-10-08 18:20:26.686835] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:04:32.939 [2024-10-08 18:20:26.686889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996603 ] 00:04:32.939 [2024-10-08 18:20:26.777513] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 996291 has claimed it. 00:04:32.939 [2024-10-08 18:20:26.777551] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:33.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (996603) - No such process 00:04:33.511 ERROR: process (pid: 996603) is no longer running 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 996291 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 996291 ']' 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 996291 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 996291 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 996291' 00:04:33.511 killing process with pid 996291 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 996291 00:04:33.511 18:20:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 996291 00:04:33.511 00:04:33.511 real 0m1.794s 00:04:33.511 user 0m5.139s 00:04:33.511 sys 0m0.386s 00:04:33.511 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.822 ************************************ 00:04:33.822 END TEST locking_overlapped_coremask 00:04:33.822 ************************************ 00:04:33.822 18:20:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:33.822 18:20:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.822 18:20:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.822 18:20:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.822 ************************************ 00:04:33.822 START TEST locking_overlapped_coremask_via_rpc 00:04:33.822 ************************************ 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=996668 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 996668 /var/tmp/spdk.sock 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 996668 ']' 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.822 18:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.822 [2024-10-08 18:20:27.700918] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:33.822 [2024-10-08 18:20:27.700971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996668 ] 00:04:33.822 [2024-10-08 18:20:27.778847] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
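The locking_overlapped_coremask failure above is plain mask arithmetic: the first target took -m 0x7 (cores 0-2), the second asked for -m 0x1c (cores 2-4), and the intersection is exactly the core named in the error:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2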
00:04:33.822 [2024-10-08 18:20:27.778872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:33.822 [2024-10-08 18:20:27.838576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.822 [2024-10-08 18:20:27.838729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.822 [2024-10-08 18:20:27.838731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.764 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.764 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:34.764 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=996998 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 996998 /var/tmp/spdk2.sock 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 996998 ']' 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.765 18:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.765 [2024-10-08 18:20:28.561625] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:34.765 [2024-10-08 18:20:28.561680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996998 ] 00:04:34.765 [2024-10-08 18:20:28.657467] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:34.765 [2024-10-08 18:20:28.657495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:34.765 [2024-10-08 18:20:28.787100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:34.765 [2024-10-08 18:20:28.787255] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.765 [2024-10-08 18:20:28.787257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.338 [2024-10-08 18:20:29.336058] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 996668 has claimed it. 
00:04:35.338 request: 00:04:35.338 { 00:04:35.338 "method": "framework_enable_cpumask_locks", 00:04:35.338 "req_id": 1 00:04:35.338 } 00:04:35.338 Got JSON-RPC error response 00:04:35.338 response: 00:04:35.338 { 00:04:35.338 "code": -32603, 00:04:35.338 "message": "Failed to claim CPU core: 2" 00:04:35.338 } 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 996668 /var/tmp/spdk.sock 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 996668 ']' 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.338 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 996998 /var/tmp/spdk2.sock 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 996998 ']' 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
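The -32603 response above is the expected outcome of the overlap this test constructs: the first target (pid 996668) runs with mask 0x7, i.e. cores 0-2, the second (pid 996998) with mask 0x1c, i.e. cores 2-4, and both share core 2. Once the first target claims its cores over RPC, the second cannot. A minimal shell sketch of that sequence, using the same binaries, sockets, and RPC method shown in this log (illustrative only, not the verbatim cpu_locks.sh source):

  # 0x7 = 0b00111 -> cores 0,1,2; 0x1c = 0b11100 -> cores 2,3,4 (overlap on core 2)
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ./scripts/rpc.py framework_enable_cpumask_locks                          # claims /var/tmp/spdk_cpu_lock_000..002
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: "Failed to claim CPU core: 2"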
00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.599 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:35.860 00:04:35.860 real 0m2.077s 00:04:35.860 user 0m0.855s 00:04:35.860 sys 0m0.134s 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.860 18:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.860 ************************************ 00:04:35.860 END TEST locking_overlapped_coremask_via_rpc 00:04:35.860 ************************************ 00:04:35.860 18:20:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:35.860 18:20:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 996668 ]] 00:04:35.860 18:20:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 996668 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 996668 ']' 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 996668 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 996668 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 996668' 00:04:35.860 killing process with pid 996668 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 996668 00:04:35.860 18:20:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 996668 00:04:36.120 18:20:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 996998 ]] 00:04:36.120 18:20:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 996998 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 996998 ']' 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 996998 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
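The xtrace above (event/cpu_locks.sh lines 36-38) is the whole lock-file check; reconstructed as a standalone helper, with comments added here, it reads:

  check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually on disk
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, the first target's 0x7 mask
      [[ ${locks[*]} == "${locks_expected[*]}" ]]         # literal compare, fails on stray or missing locks
  }

Only the first target's three locks are expected, since the second target's claim on core 2 was rejected above.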
00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 996998 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 996998' 00:04:36.120 killing process with pid 996998 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 996998 00:04:36.120 18:20:30 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 996998 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 996668 ]] 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 996668 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 996668 ']' 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 996668 00:04:36.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (996668) - No such process 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 996668 is not found' 00:04:36.381 Process with pid 996668 is not found 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 996998 ]] 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 996998 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 996998 ']' 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 996998 00:04:36.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (996998) - No such process 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 996998 is not found' 00:04:36.381 Process with pid 996998 is not found 00:04:36.381 18:20:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:36.381 00:04:36.381 real 0m16.495s 00:04:36.381 user 0m28.394s 00:04:36.381 sys 0m5.066s 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.381 18:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.381 ************************************ 00:04:36.381 END TEST cpu_locks 00:04:36.381 ************************************ 00:04:36.381 00:04:36.381 real 0m43.462s 00:04:36.381 user 1m25.793s 00:04:36.381 sys 0m8.469s 00:04:36.381 18:20:30 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.381 18:20:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.381 ************************************ 00:04:36.381 END TEST event 00:04:36.381 ************************************ 00:04:36.381 18:20:30 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:36.381 18:20:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.381 18:20:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.381 18:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.381 ************************************ 00:04:36.381 START TEST thread 00:04:36.381 ************************************ 00:04:36.381 18:20:30 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:36.642 * Looking for test storage... 00:04:36.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.642 18:20:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.642 18:20:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.642 18:20:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.642 18:20:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.642 18:20:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.642 18:20:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.642 18:20:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.642 18:20:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.642 18:20:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.642 18:20:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.642 18:20:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.642 18:20:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:36.642 18:20:30 thread -- scripts/common.sh@345 -- # : 1 00:04:36.642 18:20:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.642 18:20:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.642 18:20:30 thread -- scripts/common.sh@365 -- # decimal 1 00:04:36.642 18:20:30 thread -- scripts/common.sh@353 -- # local d=1 00:04:36.642 18:20:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.642 18:20:30 thread -- scripts/common.sh@355 -- # echo 1 00:04:36.642 18:20:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.642 18:20:30 thread -- scripts/common.sh@366 -- # decimal 2 00:04:36.642 18:20:30 thread -- scripts/common.sh@353 -- # local d=2 00:04:36.642 18:20:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.642 18:20:30 thread -- scripts/common.sh@355 -- # echo 2 00:04:36.642 18:20:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.642 18:20:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.642 18:20:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.642 18:20:30 thread -- scripts/common.sh@368 -- # return 0 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.642 --rc genhtml_branch_coverage=1 00:04:36.642 --rc genhtml_function_coverage=1 00:04:36.642 --rc genhtml_legend=1 00:04:36.642 --rc geninfo_all_blocks=1 00:04:36.642 --rc geninfo_unexecuted_blocks=1 00:04:36.642 00:04:36.642 ' 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.642 --rc genhtml_branch_coverage=1 00:04:36.642 --rc genhtml_function_coverage=1 00:04:36.642 --rc genhtml_legend=1 00:04:36.642 --rc geninfo_all_blocks=1 00:04:36.642 --rc geninfo_unexecuted_blocks=1 00:04:36.642 00:04:36.642 ' 00:04:36.642 18:20:30 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.642 --rc genhtml_branch_coverage=1 00:04:36.642 --rc genhtml_function_coverage=1 00:04:36.642 --rc genhtml_legend=1 00:04:36.642 --rc geninfo_all_blocks=1 00:04:36.642 --rc geninfo_unexecuted_blocks=1 00:04:36.642 00:04:36.642 ' 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.642 --rc genhtml_branch_coverage=1 00:04:36.642 --rc genhtml_function_coverage=1 00:04:36.642 --rc genhtml_legend=1 00:04:36.642 --rc geninfo_all_blocks=1 00:04:36.642 --rc geninfo_unexecuted_blocks=1 00:04:36.642 00:04:36.642 ' 00:04:36.642 18:20:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.642 18:20:30 thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.642 ************************************ 00:04:36.642 START TEST thread_poller_perf 00:04:36.642 ************************************ 00:04:36.642 18:20:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:36.642 [2024-10-08 18:20:30.686511] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:36.642 [2024-10-08 18:20:30.686613] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997448 ] 00:04:36.903 [2024-10-08 18:20:30.769772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.903 [2024-10-08 18:20:30.839869] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.903 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:37.844 [2024-10-08T16:20:31.901Z] ====================================== 00:04:37.844 [2024-10-08T16:20:31.901Z] busy:2409838716 (cyc) 00:04:37.844 [2024-10-08T16:20:31.901Z] total_run_count: 419000 00:04:37.844 [2024-10-08T16:20:31.901Z] tsc_hz: 2400000000 (cyc) 00:04:37.844 [2024-10-08T16:20:31.901Z] ====================================== 00:04:37.844 [2024-10-08T16:20:31.901Z] poller_cost: 5751 (cyc), 2396 (nsec) 00:04:37.844 00:04:37.844 real 0m1.226s 00:04:37.844 user 0m1.129s 00:04:37.844 sys 0m0.091s 00:04:37.844 18:20:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.844 18:20:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.844 ************************************ 00:04:37.844 END TEST thread_poller_perf 00:04:37.844 ************************************ 00:04:38.105 18:20:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:38.105 18:20:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:38.105 18:20:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.105 18:20:31 thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.105 ************************************ 00:04:38.105 START TEST thread_poller_perf 00:04:38.105 ************************************ 00:04:38.105 18:20:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:38.105 [2024-10-08 18:20:31.986044] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:04:38.105 [2024-10-08 18:20:31.986147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997797 ] 00:04:38.105 [2024-10-08 18:20:32.065255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.105 [2024-10-08 18:20:32.128328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.105 Running 1000 pollers for 1 seconds with 0 microseconds period. 
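The cost figures in the run above follow directly from the printed counters: poller_cost in cycles is busy / total_run_count, and the nanosecond figure divides that by tsc_hz expressed in cycles per nanosecond. A quick re-derivation with bc, using the numbers exactly as printed (arithmetic check only, not part of the test):

  busy=2409838716 runs=419000 tsc_hz=2400000000
  echo "$busy / $runs" | bc                       # 5751 cyc per poll
  echo "$busy * 10^9 / ($runs * $tsc_hz)" | bc    # 2396 nsec per poll

The same relation holds for the 0-microsecond-period run below: 2401268902 / 5555000 gives 432 cyc, about 180 nsec at 2.4 GHz.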
00:04:39.491 [2024-10-08T16:20:33.548Z] ====================================== 00:04:39.491 [2024-10-08T16:20:33.548Z] busy:2401268902 (cyc) 00:04:39.491 [2024-10-08T16:20:33.548Z] total_run_count: 5555000 00:04:39.491 [2024-10-08T16:20:33.548Z] tsc_hz: 2400000000 (cyc) 00:04:39.491 [2024-10-08T16:20:33.548Z] ====================================== 00:04:39.491 [2024-10-08T16:20:33.548Z] poller_cost: 432 (cyc), 180 (nsec) 00:04:39.491 00:04:39.491 real 0m1.208s 00:04:39.491 user 0m1.121s 00:04:39.491 sys 0m0.083s 00:04:39.491 18:20:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.491 18:20:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.491 ************************************ 00:04:39.491 END TEST thread_poller_perf 00:04:39.491 ************************************ 00:04:39.491 18:20:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:39.491 00:04:39.491 real 0m2.776s 00:04:39.491 user 0m2.425s 00:04:39.491 sys 0m0.364s 00:04:39.491 18:20:33 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.491 18:20:33 thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.491 ************************************ 00:04:39.491 END TEST thread 00:04:39.491 ************************************ 00:04:39.491 18:20:33 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:39.491 18:20:33 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:39.491 18:20:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.491 18:20:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.491 18:20:33 -- common/autotest_common.sh@10 -- # set +x 00:04:39.491 ************************************ 00:04:39.491 START TEST app_cmdline 00:04:39.492 ************************************ 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:39.492 * Looking for test storage... 
00:04:39.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.492 18:20:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 
00:04:39.492 00:04:39.492 ' 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 18:20:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:39.492 18:20:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=998203 00:04:39.492 18:20:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 998203 00:04:39.492 18:20:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 998203 ']' 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.492 18:20:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:39.492 [2024-10-08 18:20:33.545823] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:04:39.492 [2024-10-08 18:20:33.545894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998203 ] 00:04:39.753 [2024-10-08 18:20:33.627209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.753 [2024-10-08 18:20:33.697120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.325 18:20:34 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.325 18:20:34 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:04:40.325 18:20:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:40.587 { 00:04:40.587 "version": "SPDK v25.01-pre git sha1 6f51f621d", 00:04:40.587 "fields": { 00:04:40.587 "major": 25, 00:04:40.587 "minor": 1, 00:04:40.587 "patch": 0, 00:04:40.587 "suffix": "-pre", 00:04:40.587 "commit": "6f51f621d" 00:04:40.587 } 00:04:40.587 } 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:40.587 18:20:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:40.587 18:20:34 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.848 request: 00:04:40.848 { 00:04:40.848 "method": "env_dpdk_get_mem_stats", 00:04:40.848 "req_id": 1 00:04:40.848 } 00:04:40.848 Got JSON-RPC error response 00:04:40.848 response: 00:04:40.848 { 00:04:40.848 "code": -32601, 00:04:40.848 "message": "Method not found" 00:04:40.848 } 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.848 18:20:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 998203 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 998203 ']' 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 998203 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 998203 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 998203' 00:04:40.848 killing process with pid 998203 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@969 -- # kill 998203 00:04:40.848 18:20:34 app_cmdline -- common/autotest_common.sh@974 -- # wait 998203 00:04:41.109 00:04:41.109 real 0m1.691s 00:04:41.109 user 0m1.995s 00:04:41.109 sys 0m0.468s 00:04:41.109 18:20:34 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.109 18:20:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:41.109 ************************************ 00:04:41.109 END TEST app_cmdline 00:04:41.109 ************************************ 00:04:41.109 18:20:35 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:41.109 18:20:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.109 18:20:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.109 18:20:35 -- common/autotest_common.sh@10 -- # set +x 00:04:41.109 ************************************ 00:04:41.109 START TEST version 00:04:41.109 ************************************ 00:04:41.109 18:20:35 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:41.109 * Looking for test storage... 
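The -32601 "Method not found" above is likewise deliberate: this spdk_tgt was launched with an RPC allowlist (see the cmdline.sh@16 invocation earlier in this log), so anything outside the two permitted methods, env_dpdk_get_mem_stats included, is rejected at the server. The relevant launch flag, as used by the test:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods   # all other methods return -32601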
00:04:41.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:41.109 18:20:35 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.109 18:20:35 version -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.109 18:20:35 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.370 18:20:35 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.370 18:20:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.370 18:20:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.370 18:20:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.370 18:20:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.370 18:20:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.370 18:20:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.370 18:20:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.370 18:20:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.370 18:20:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.370 18:20:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.370 18:20:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.370 18:20:35 version -- scripts/common.sh@344 -- # case "$op" in 00:04:41.370 18:20:35 version -- scripts/common.sh@345 -- # : 1 00:04:41.370 18:20:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.370 18:20:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.370 18:20:35 version -- scripts/common.sh@365 -- # decimal 1 00:04:41.370 18:20:35 version -- scripts/common.sh@353 -- # local d=1 00:04:41.370 18:20:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.370 18:20:35 version -- scripts/common.sh@355 -- # echo 1 00:04:41.370 18:20:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.370 18:20:35 version -- scripts/common.sh@366 -- # decimal 2 00:04:41.370 18:20:35 version -- scripts/common.sh@353 -- # local d=2 00:04:41.370 18:20:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.370 18:20:35 version -- scripts/common.sh@355 -- # echo 2 00:04:41.370 18:20:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.370 18:20:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.370 18:20:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.370 18:20:35 version -- scripts/common.sh@368 -- # return 0 00:04:41.370 18:20:35 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.370 18:20:35 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.370 --rc genhtml_branch_coverage=1 00:04:41.370 --rc genhtml_function_coverage=1 00:04:41.370 --rc genhtml_legend=1 00:04:41.370 --rc geninfo_all_blocks=1 00:04:41.370 --rc geninfo_unexecuted_blocks=1 00:04:41.370 00:04:41.370 ' 00:04:41.370 18:20:35 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.370 --rc genhtml_branch_coverage=1 00:04:41.370 --rc genhtml_function_coverage=1 00:04:41.370 --rc genhtml_legend=1 00:04:41.370 --rc geninfo_all_blocks=1 00:04:41.370 --rc geninfo_unexecuted_blocks=1 00:04:41.370 00:04:41.370 ' 00:04:41.370 18:20:35 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.370 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.370 --rc genhtml_branch_coverage=1 00:04:41.370 --rc genhtml_function_coverage=1 00:04:41.370 --rc genhtml_legend=1 00:04:41.370 --rc geninfo_all_blocks=1 00:04:41.370 --rc geninfo_unexecuted_blocks=1 00:04:41.370 00:04:41.370 ' 00:04:41.370 18:20:35 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.370 --rc genhtml_branch_coverage=1 00:04:41.370 --rc genhtml_function_coverage=1 00:04:41.370 --rc genhtml_legend=1 00:04:41.370 --rc geninfo_all_blocks=1 00:04:41.370 --rc geninfo_unexecuted_blocks=1 00:04:41.370 00:04:41.370 ' 00:04:41.370 18:20:35 version -- app/version.sh@17 -- # get_header_version major 00:04:41.370 18:20:35 version -- app/version.sh@14 -- # cut -f2 00:04:41.370 18:20:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.370 18:20:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.370 18:20:35 version -- app/version.sh@17 -- # major=25 00:04:41.370 18:20:35 version -- app/version.sh@18 -- # get_header_version minor 00:04:41.370 18:20:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.370 18:20:35 version -- app/version.sh@14 -- # cut -f2 00:04:41.371 18:20:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.371 18:20:35 version -- app/version.sh@18 -- # minor=1 00:04:41.371 18:20:35 version -- app/version.sh@19 -- # get_header_version patch 00:04:41.371 18:20:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.371 18:20:35 version -- app/version.sh@14 -- # cut -f2 00:04:41.371 18:20:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.371 18:20:35 version -- app/version.sh@19 -- # patch=0 00:04:41.371 18:20:35 version -- app/version.sh@20 -- # get_header_version suffix 00:04:41.371 18:20:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.371 18:20:35 version -- app/version.sh@14 -- # cut -f2 00:04:41.371 18:20:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.371 18:20:35 version -- app/version.sh@20 -- # suffix=-pre 00:04:41.371 18:20:35 version -- app/version.sh@22 -- # version=25.1 00:04:41.371 18:20:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:41.371 18:20:35 version -- app/version.sh@28 -- # version=25.1rc0 00:04:41.371 18:20:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:41.371 18:20:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:41.371 18:20:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:41.371 18:20:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:41.371 00:04:41.371 real 0m0.283s 00:04:41.371 user 0m0.163s 00:04:41.371 sys 0m0.161s 00:04:41.371 18:20:35 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.371 
18:20:35 version -- common/autotest_common.sh@10 -- # set +x 00:04:41.371 ************************************ 00:04:41.371 END TEST version 00:04:41.371 ************************************ 00:04:41.371 18:20:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:41.371 18:20:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:41.371 18:20:35 -- spdk/autotest.sh@194 -- # uname -s 00:04:41.371 18:20:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:41.371 18:20:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:41.371 18:20:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:41.371 18:20:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:41.371 18:20:35 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:41.371 18:20:35 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:41.371 18:20:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.371 18:20:35 -- common/autotest_common.sh@10 -- # set +x 00:04:41.633 18:20:35 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:41.633 18:20:35 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:41.633 18:20:35 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:41.633 18:20:35 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:41.633 18:20:35 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:41.633 18:20:35 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:41.633 18:20:35 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:41.633 18:20:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:41.633 18:20:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.633 18:20:35 -- common/autotest_common.sh@10 -- # set +x 00:04:41.633 ************************************ 00:04:41.633 START TEST nvmf_tcp 00:04:41.633 ************************************ 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:41.633 * Looking for test storage... 
00:04:41.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.633 18:20:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.633 --rc genhtml_branch_coverage=1 00:04:41.633 --rc genhtml_function_coverage=1 00:04:41.633 --rc genhtml_legend=1 00:04:41.633 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.633 --rc genhtml_branch_coverage=1 00:04:41.633 --rc genhtml_function_coverage=1 00:04:41.633 --rc genhtml_legend=1 00:04:41.633 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.633 --rc genhtml_branch_coverage=1 00:04:41.633 --rc genhtml_function_coverage=1 00:04:41.633 --rc genhtml_legend=1 00:04:41.633 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.633 --rc genhtml_branch_coverage=1 00:04:41.633 --rc genhtml_function_coverage=1 00:04:41.633 --rc genhtml_legend=1 00:04:41.633 --rc geninfo_all_blocks=1 00:04:41.633 --rc geninfo_unexecuted_blocks=1 00:04:41.633 00:04:41.633 ' 00:04:41.633 18:20:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:41.633 18:20:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:41.633 18:20:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.633 18:20:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.896 ************************************ 00:04:41.896 START TEST nvmf_target_core 00:04:41.896 ************************************ 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:41.896 * Looking for test storage... 00:04:41.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.896 --rc genhtml_branch_coverage=1 00:04:41.896 --rc genhtml_function_coverage=1 00:04:41.896 --rc genhtml_legend=1 00:04:41.896 --rc geninfo_all_blocks=1 00:04:41.896 --rc geninfo_unexecuted_blocks=1 00:04:41.896 00:04:41.896 ' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.896 --rc genhtml_branch_coverage=1 00:04:41.896 --rc genhtml_function_coverage=1 00:04:41.896 --rc genhtml_legend=1 00:04:41.896 --rc geninfo_all_blocks=1 00:04:41.896 --rc geninfo_unexecuted_blocks=1 00:04:41.896 00:04:41.896 ' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.896 --rc genhtml_branch_coverage=1 00:04:41.896 --rc genhtml_function_coverage=1 00:04:41.896 --rc genhtml_legend=1 00:04:41.896 --rc geninfo_all_blocks=1 00:04:41.896 --rc geninfo_unexecuted_blocks=1 00:04:41.896 00:04:41.896 ' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.896 --rc genhtml_branch_coverage=1 00:04:41.896 --rc genhtml_function_coverage=1 00:04:41.896 --rc genhtml_legend=1 00:04:41.896 --rc geninfo_all_blocks=1 00:04:41.896 --rc geninfo_unexecuted_blocks=1 00:04:41.896 00:04:41.896 ' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:41.896 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:41.897 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.897 18:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:42.160 
************************************ 00:04:42.160 START TEST nvmf_abort 00:04:42.160 ************************************ 00:04:42.160 18:20:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:42.160 * Looking for test storage... 00:04:42.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.160 --rc genhtml_branch_coverage=1 00:04:42.160 --rc genhtml_function_coverage=1 00:04:42.160 --rc genhtml_legend=1 00:04:42.160 --rc geninfo_all_blocks=1 00:04:42.160 --rc geninfo_unexecuted_blocks=1 00:04:42.160 00:04:42.160 ' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.160 --rc genhtml_branch_coverage=1 00:04:42.160 --rc genhtml_function_coverage=1 00:04:42.160 --rc genhtml_legend=1 00:04:42.160 --rc geninfo_all_blocks=1 00:04:42.160 --rc geninfo_unexecuted_blocks=1 00:04:42.160 00:04:42.160 ' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.160 --rc genhtml_branch_coverage=1 00:04:42.160 --rc genhtml_function_coverage=1 00:04:42.160 --rc genhtml_legend=1 00:04:42.160 --rc geninfo_all_blocks=1 00:04:42.160 --rc geninfo_unexecuted_blocks=1 00:04:42.160 00:04:42.160 ' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.160 --rc genhtml_branch_coverage=1 00:04:42.160 --rc genhtml_function_coverage=1 00:04:42.160 --rc genhtml_legend=1 00:04:42.160 --rc geninfo_all_blocks=1 00:04:42.160 --rc geninfo_unexecuted_blocks=1 00:04:42.160 00:04:42.160 ' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.160 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.161 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
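The "[: : integer expression expected" message that common.sh prints twice above is a real shell error captured by the log, not test output: the traced command at nvmf/common.sh line 33 is '[' '' -eq 1 ']', and test(1)'s -eq operator requires integer operands, so an unset or empty variable makes the comparison itself fail rather than evaluate false. A minimal reproduction and one defensive rewrite, using a hypothetical variable name purely for illustration:

    # MAYBE_FLAG is hypothetical; it stands in for whatever expands to the
    # empty string at common.sh line 33 in the trace above.
    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] && echo on        # prints: [: : integer expression expected
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo on   # defaulting the expansion keeps the test
                                              # well-formed; here it is simply false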
00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:42.421 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:50.565 18:20:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:50.565 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:04:50.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:04:50.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:50.566 18:20:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:04:50.566 Found net devices under 0000:31:00.0: cvl_0_0 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:04:50.566 Found net devices under 0000:31:00.1: cvl_0_1 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:50.566 18:20:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:50.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:50.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:04:50.566 00:04:50.566 --- 10.0.0.2 ping statistics --- 00:04:50.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:50.566 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:50.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:50.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:04:50.566 00:04:50.566 --- 10.0.0.1 ping statistics --- 00:04:50.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:50.566 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1002753 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1002753 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1002753 ']' 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.566 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.566 [2024-10-08 18:20:43.919302] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:04:50.566 [2024-10-08 18:20:43.919368] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:50.566 [2024-10-08 18:20:44.011322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.566 [2024-10-08 18:20:44.105896] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:50.566 [2024-10-08 18:20:44.105969] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:50.566 [2024-10-08 18:20:44.105988] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:50.566 [2024-10-08 18:20:44.105995] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:50.566 [2024-10-08 18:20:44.106002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:50.566 [2024-10-08 18:20:44.107576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.566 [2024-10-08 18:20:44.107762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.566 [2024-10-08 18:20:44.107877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 [2024-10-08 18:20:44.801506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 Malloc0 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 Delay0 
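Condensed from the rpc_cmd trace above and the subsystem calls that follow just below, the abort target setup amounts to the rpc.py sequence in this sketch (the relative path and the reading of the delay arguments as microseconds are assumptions, not stated by the log). The very large delay is what gives the abort test I/O that is still outstanding when the aborts arrive:

    rpc=scripts/rpc.py                                   # relative to an spdk checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256  # TCP transport, options as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB RAM-backed bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000     # wrap it in a delay bdev (values assumed to be us)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose Delay0 (NSID 1 in the run below)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420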
00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.827 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.827 [2024-10-08 18:20:44.884773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:51.089 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.089 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:51.089 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.089 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:51.089 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.089 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:51.089 [2024-10-08 18:20:45.026510] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:53.640 Initializing NVMe Controllers 00:04:53.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:53.640 controller IO queue size 128 less than required 00:04:53.640 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:53.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:53.640 Initialization complete. Launching workers. 
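The workload itself is SPDK's bundled abort example, launched above with queue depth 128 against a controller whose I/O queue is only 128 entries deep (hence the "IO queue size 128 less than required" notice). In the counters that follow, the large "failed" I/O count appears to be the intended outcome of this test: each I/O that an abort catches completes with an error, and the abort-submitted/success figures on the next line show nearly all aborts landed. Reproducing the run outside the harness would look roughly like this (path relative to an spdk build; the flag readings below are inferred from SPDK example conventions, not stated by the log):

    # -r gives the connect string for the listener created above; -q is read
    # here as queue depth, -t as run time in seconds, -c as the core mask,
    # -l as the log level -- all inferred, so treat this as a sketch.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128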
00:04:53.640 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28182 00:04:53.640 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28243, failed to submit 62 00:04:53.640 success 28186, unsuccessful 57, failed 0 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:53.640 rmmod nvme_tcp 00:04:53.640 rmmod nvme_fabrics 00:04:53.640 rmmod nvme_keyring 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1002753 ']' 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1002753 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1002753 ']' 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1002753 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1002753 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1002753' 00:04:53.640 killing process with pid 1002753 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1002753 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1002753 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:04:53.640 18:20:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:53.640 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.560 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:55.560 00:04:55.560 real 0m13.620s 00:04:55.560 user 0m14.147s 00:04:55.560 sys 0m6.870s 00:04:55.560 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.560 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:55.560 ************************************ 00:04:55.560 END TEST nvmf_abort 00:04:55.560 ************************************ 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:55.821 ************************************ 00:04:55.821 START TEST nvmf_ns_hotplug_stress 00:04:55.821 ************************************ 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:55.821 * Looking for test storage... 
00:04:55.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.821 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.822 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:56.083 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.083 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:56.083 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:56.083 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.083 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.084 --rc genhtml_branch_coverage=1 00:04:56.084 --rc genhtml_function_coverage=1 00:04:56.084 --rc genhtml_legend=1 00:04:56.084 --rc geninfo_all_blocks=1 00:04:56.084 --rc geninfo_unexecuted_blocks=1 00:04:56.084 00:04:56.084 ' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.084 --rc genhtml_branch_coverage=1 00:04:56.084 --rc genhtml_function_coverage=1 00:04:56.084 --rc genhtml_legend=1 00:04:56.084 --rc geninfo_all_blocks=1 00:04:56.084 --rc geninfo_unexecuted_blocks=1 00:04:56.084 00:04:56.084 ' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.084 --rc genhtml_branch_coverage=1 00:04:56.084 --rc genhtml_function_coverage=1 00:04:56.084 --rc genhtml_legend=1 00:04:56.084 --rc geninfo_all_blocks=1 00:04:56.084 --rc geninfo_unexecuted_blocks=1 00:04:56.084 00:04:56.084 ' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.084 --rc genhtml_branch_coverage=1 00:04:56.084 --rc genhtml_function_coverage=1 00:04:56.084 --rc genhtml_legend=1 00:04:56.084 --rc geninfo_all_blocks=1 00:04:56.084 --rc geninfo_unexecuted_blocks=1 00:04:56.084 00:04:56.084 ' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:56.084 18:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:04.235 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:04.236 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:04.236 
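The device-discovery records above show nvmf/common.sh building allow-lists of NIC PCI IDs (Intel E810 variants 0x1592 and 0x159b, the x722 ID 0x37d2, and a set of Mellanox ConnectX IDs) and then walking pci_devs to match what the host actually has. A minimal stand-alone sketch of the same check done by hand, assuming only that lspci is available; the IDs are taken from the log, and nothing here comes from the test scripts themselves:

    # List Intel E810 NICs by PCI vendor:device ID, as matched in the log above
    for dev_id in 1592 159b; do
        lspci -d "8086:${dev_id}"   # vendor 0x8086 = Intel
    done
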
18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:04.236 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:04.236 Found net devices under 0000:31:00.0: cvl_0_0 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:04.236 Found net devices under 0000:31:00.1: cvl_0_1 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:04.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:04.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:05:04.236 00:05:04.236 --- 10.0.0.2 ping statistics --- 00:05:04.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:04.236 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:04.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:04.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:05:04.236 00:05:04.236 --- 10.0.0.1 ping statistics --- 00:05:04.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:04.236 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1007779 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1007779 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1007779 ']' 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.236 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:04.236 [2024-10-08 18:20:57.657062] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:05:04.237 [2024-10-08 18:20:57.657126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:04.237 [2024-10-08 18:20:57.747829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.237 [2024-10-08 18:20:57.840494] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:04.237 [2024-10-08 18:20:57.840560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:04.237 [2024-10-08 18:20:57.840569] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:04.237 [2024-10-08 18:20:57.840576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:04.237 [2024-10-08 18:20:57.840583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
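The records above have split the two E810 ports across network namespaces: the target-side port (cvl_0_0) is moved into a fresh namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, and one ping in each direction confirms reachability before nvmf_tgt is launched inside that namespace. Condensed into one runnable sequence (every command appears in the log; run as root, error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                 # root netns -> target netns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> root netns
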
00:05:04.237 [2024-10-08 18:20:57.842159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.237 [2024-10-08 18:20:57.842417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.237 [2024-10-08 18:20:57.842417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:04.503 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:04.766 [2024-10-08 18:20:58.692973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.766 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:05.029 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:05.291 [2024-10-08 18:20:59.102832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:05.291 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:05.291 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:05.552 Malloc0 00:05:05.552 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:05.816 Delay0 00:05:05.816 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.077 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:06.077 NULL1 00:05:06.339 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:06.339 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1008274 00:05:06.339 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:06.339 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.339 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:06.601 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.862 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:06.862 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:06.862 true 00:05:06.862 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:06.862 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.123 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.384 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:07.384 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:07.384 true 00:05:07.644 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:07.644 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.644 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.905 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:07.905 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:07.905 true 00:05:08.166 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:08.166 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.166 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.427 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:08.427 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:08.688 true 00:05:08.688 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:08.688 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.688 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.948 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:08.948 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:09.209 true 00:05:09.209 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:09.209 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.209 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.469 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:09.469 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:09.730 true 00:05:09.731 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:09.731 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.731 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.992 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:09.992 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:10.252 true 00:05:10.252 18:21:04 
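The records above and below all repeat the same stress cycle: with spdk_nvme_perf (PERF_PID=1008274) continuously issuing randread I/O over NVMe/TCP, each iteration removes namespace 1 from cnode1, re-attaches Delay0, bumps null_size by one, and resizes NULL1 to the new value. A paraphrased sketch of that driver loop, reconstructed from the ns_hotplug_stress.sh line references visible in the records (the real script may differ in details such as timing and exit handling):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do           # run while the perf job is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"      # resize races against add/remove
    done
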
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:10.252 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.514 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.514 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:10.514 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:10.776 true 00:05:10.776 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:10.776 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.037 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.037 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:11.037 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:11.297 true 00:05:11.297 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:11.297 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.558 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.558 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:11.558 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:11.820 true 00:05:11.820 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:11.820 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.083 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.344 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:12.344 18:21:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:12.344 true 00:05:12.344 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:12.344 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.605 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.867 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:12.867 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:12.867 true 00:05:12.867 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:12.867 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.127 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.389 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:13.389 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:13.389 true 00:05:13.389 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:13.389 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.650 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.911 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:13.911 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:13.911 true 00:05:13.911 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:13.911 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.172 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.438 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:14.438 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:14.438 true 00:05:14.705 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:14.705 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.705 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.966 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:14.966 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:15.228 true 00:05:15.228 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:15.228 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.228 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.489 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:15.489 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:15.750 true 00:05:15.751 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:15.751 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.751 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.012 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:16.012 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:16.312 true 00:05:16.312 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:16.312 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.312 18:21:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.575 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:16.575 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:16.835 true 00:05:16.835 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:16.835 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.096 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.096 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:17.096 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:17.357 true 00:05:17.357 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:17.357 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.617 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.617 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:17.617 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:17.877 true 00:05:17.877 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:17.877 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.137 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.397 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:18.397 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:18.397 true 00:05:18.397 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:18.397 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.659 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.919 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:18.919 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:18.919 true 00:05:18.919 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:18.919 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.181 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.441 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:19.441 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:19.441 true 00:05:19.441 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:19.441 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.702 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.962 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:19.962 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:19.962 true 00:05:20.223 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:20.223 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.223 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.502 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:20.502 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:20.820 true 00:05:20.820 18:21:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:20.820 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.820 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.134 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:21.134 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:21.134 true 00:05:21.134 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:21.134 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.434 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.734 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:21.734 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:21.734 true 00:05:21.734 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:21.734 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.009 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.270 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:22.270 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:22.270 true 00:05:22.270 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:22.270 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.531 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.791 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:22.791 18:21:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:22.791 true 00:05:22.791 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:22.791 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.052 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.312 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:23.312 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:23.312 true 00:05:23.312 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:23.312 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.572 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.833 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:23.833 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:23.833 true 00:05:24.093 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:24.093 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.093 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.354 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:24.354 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:24.614 true 00:05:24.614 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:24.614 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.614 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.874 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:24.874 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:25.134 true 00:05:25.134 18:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:25.134 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.134 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.394 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:25.394 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:25.655 true 00:05:25.655 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:25.655 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.915 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.915 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:25.915 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:26.175 true 00:05:26.175 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:26.175 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.436 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.436 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:26.436 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:26.696 true 00:05:26.696 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:26.696 18:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.956 18:21:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.956 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:26.956 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:27.217 true 00:05:27.217 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:27.217 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.477 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.738 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:27.738 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:27.738 true 00:05:27.738 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:27.738 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.998 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.259 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:28.259 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:28.259 true 00:05:28.259 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:28.259 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.519 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.779 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:28.779 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:28.779 true 00:05:28.779 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:28.779 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.040 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.300 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:29.300 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:29.300 true 00:05:29.560 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:29.560 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.560 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.821 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:29.821 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:30.081 true 00:05:30.081 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:30.081 18:21:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.081 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.341 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:30.341 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:30.602 true 00:05:30.602 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:30.602 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.602 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.862 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:30.862 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:31.123 true 00:05:31.123 18:21:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:31.123 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.385 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.385 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:31.385 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:31.645 true 00:05:31.645 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:31.645 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.906 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.906 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:31.906 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:32.167 true 00:05:32.167 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:32.167 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.428 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.428 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:32.428 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:32.689 true 00:05:32.689 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:32.689 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.950 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.210 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:33.210 18:21:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:33.210 true 00:05:33.210 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:33.210 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.471 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.732 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:33.732 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:33.732 true 00:05:33.732 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:33.732 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.993 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.253 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:34.253 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:34.253 true 00:05:34.513 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:34.513 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.513 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.774 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:34.774 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:35.035 true 00:05:35.035 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:35.035 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.035 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.296 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:35.296 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:35.555 true 00:05:35.555 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:35.555 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.555 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.815 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:35.815 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:36.077 true 00:05:36.077 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:36.077 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.338 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.339 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:36.339 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:36.599 true 00:05:36.599 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274 00:05:36.599 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.860 Initializing NVMe Controllers 00:05:36.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:36.860 Controller IO queue size 128, less than required. 00:05:36.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:36.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:36.860 Initialization complete. Launching workers. 
00:05:36.860 ========================================================
00:05:36.860                                                                Latency(us)
00:05:36.860 Device Information                                                        :     IOPS     MiB/s   Average       min       max
00:05:36.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0 :  31178.60     15.22   4105.31   1126.18   8031.77
00:05:36.860 ========================================================
00:05:36.860 Total                                                                     :  31178.60     15.22   4105.31   1126.18   8031.77
00:05:36.860
00:05:36.860 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.860 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:05:36.860 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:05:37.122 true
00:05:37.122 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1008274
00:05:37.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1008274) - No such process
00:05:37.122 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1008274
00:05:37.122 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.384 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:37.384 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:37.384 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:37.384 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:37.384 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:37.384 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:37.662 null0
00:05:37.662 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:37.662 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:37.662 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:37.923 null1
00:05:37.923 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:37.923 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:37.923 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:37.923 null2
00:05:37.923 18:21:31
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:37.923 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.923 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:38.183 null3 00:05:38.183 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.183 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.183 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:38.444 null4 00:05:38.444 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.444 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.444 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:38.444 null5 00:05:38.444 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.444 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.444 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:38.705 null6 00:05:38.705 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.705 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.705 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:38.966 null7 00:05:38.966 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.966 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.966 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
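[annotation] The @44-@50 trace entries earlier in this run repeat one iteration of the hotplug/resize cycle in test/nvmf/target/ns_hotplug_stress.sh. A minimal sketch of that loop, reconstructed from the "-- #" markers; the RPC verbs and arguments are exactly what the log shows, while $rpc, $nqn, $perf_pid and the starting null_size are assumed names/values, not the script verbatim:

    # Sketch only, reconstructed from the traced script lines above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1024                                   # starting value illustrative

    # line 44: `kill -0` delivers no signal; it only tests that the perf
    # I/O generator (PID 1008274 in this run, whose latency summary is
    # printed above) is still alive. When perf exits, kill -0 fails with
    # "No such process" and the loop ends.
    while kill -0 "$perf_pid"; do
        $rpc nvmf_subsystem_remove_ns "$nqn" 1       # line 45: hot-remove nsid 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0     # line 46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                 # line 49: grow the target size
        $rpc bdev_null_resize NULL1 "$null_size"     # line 50: resize NULL1 under live I/O
    done
    wait "$perf_pid"                                 # line 53: reap the generator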
00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
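[annotation] The interleaved @14-@18 entries around this point come from eight concurrent add_remove workers. Reconstructed from the markers, the helper looks approximately like the sketch below; the argument order and the 10-iteration bound are taken from the trace, everything else is an assumption:

    # Sketch of the add_remove helper traced at script lines 14-18.
    # $rpc and $nqn as in the previous sketch (assumed names).
    add_remove() {
        local nsid=$1 bdev=$2                                    # line 14
        for (( i = 0; i < 10; ++i )); do                         # line 16
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # line 17: attach bdev as nsid
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"         # line 18: detach it again
        done
    }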
00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
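[annotation] The @58-@66 markers trace the driver loop around that helper: create eight null bdevs, launch one background worker per bdev, then join on all of the recorded PIDs (visible as "wait 1015461 1015463 ..." just below). A condensed sketch under the same assumptions:

    # Sketch of the driver traced at script lines 58-66.
    nthreads=8; pids=()                               # line 58
    for (( i = 0; i < nthreads; ++i )); do            # line 59
        $rpc bdev_null_create "null$i" 100 4096       # line 60: null bdev, 100 (MiB per the
    done                                              #          RPC's convention), 4096 B blocks
    for (( i = 0; i < nthreads; ++i )); do            # line 62
        add_remove $((i + 1)) "null$i" &              # line 63: nsid i+1 <-> bdev null$i
        pids+=($!)                                    # line 64: remember the worker's PID
    done
    wait "${pids[@]}"                                 # line 66: join all eight workers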
00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1015461 1015463 1015466 1015469 1015472 1015475 1015477 1015480 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.967 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.967 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.230 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.492 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.493 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.753 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.754 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.016 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.278 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.541 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
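The churn traced above is the core of ns_hotplug_stress.sh: script lines @16-@17 attach namespaces 1-8 (each backed by one of the null bdevs null0-null7) to subsystem nqn.2016-06.io.spdk:cnode1, line @18 detaches them again, and the outer counter repeats this for ten iterations while host I/O keeps running. A minimal sketch of the loop shape these entries imply, reusing the rpc.py path and bdev names shown in the trace (the real script text may order the calls differently):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do
        # Attach namespaces 1..8 in a shuffled order, as the trace shows.
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # Detach them again, also shuffled, to exercise the hotplug paths.
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done

The shuffled ordering is why the add/remove interleaving in the trace looks different from one pass to the next.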
00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.806 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.067 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.067 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
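Each of these calls is a JSON-RPC request to the running nvmf_tgt: nvmf_subsystem_add_ns maps a bdev into the subsystem under an explicit namespace ID, and nvmf_subsystem_remove_ns detaches that ID again, after which a connected host is notified of the namespace-list change and rescans its block devices. A hedged standalone pair matching the argument order seen in the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose bdev null3 as namespace 4 of the subsystem.
    "$rpc_py" nvmf_subsystem_add_ns -n 4 "$nqn" null3

    # Detach namespace 4 again; the host-side block device disappears and
    # any in-flight I/O to it completes with an error.
    "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 4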
00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.329 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.593 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.854 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.855 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.117 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.117 18:21:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.117 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.378 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 
18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.637 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:42.897 rmmod nvme_tcp 00:05:42.897 rmmod nvme_fabrics 00:05:42.897 rmmod nvme_keyring 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1007779 ']' 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1007779 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1007779 ']' 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1007779 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1007779 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1007779' 00:05:42.897 killing process with pid 1007779 00:05:42.897 
18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1007779 00:05:42.897 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1007779 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.158 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.071 00:05:45.071 real 0m49.368s 00:05:45.071 user 3m20.525s 00:05:45.071 sys 0m17.694s 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.071 ************************************ 00:05:45.071 END TEST nvmf_ns_hotplug_stress 00:05:45.071 ************************************ 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.071 18:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.333 ************************************ 00:05:45.333 START TEST nvmf_delete_subsystem 00:05:45.333 ************************************ 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:45.333 * Looking for test storage... 
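Between the two tests, nvmftestfini (traced above) tears the target environment back down: it kills the nvmf_tgt process it started, unloads the kernel NVMe/TCP initiator modules, strips the SPDK_NVMF iptables rules, and removes the network namespace holding the target-side interface. Condensed into plain commands (a hypothetical summary of the helpers in test/nvmf/common.sh, with $nvmfpid standing in for the traced PID 1007779):

    sync
    for i in {1..20}; do                       # retried, as the trace shows
        modprobe -v -r nvme-tcp && break       # also drops nvme-fabrics/keyring
    done
    kill "$nvmfpid" && wait "$nvmfpid"         # stop the nvmf_tgt reactors
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test rules
    ip netns delete cvl_0_0_ns_spdk            # remove the target namespace
    ip -4 addr flush cvl_0_1                   # clear the initiator address

run_test then prints the END TEST banner with the real/user/sys times and immediately starts the next sub-test, nvmf_delete_subsystem, whose setup is traced below.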
00:05:45.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.333 --rc genhtml_branch_coverage=1 00:05:45.333 --rc genhtml_function_coverage=1 00:05:45.333 --rc genhtml_legend=1 00:05:45.333 --rc geninfo_all_blocks=1 00:05:45.333 --rc geninfo_unexecuted_blocks=1 00:05:45.333 00:05:45.333 ' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.333 --rc genhtml_branch_coverage=1 00:05:45.333 --rc genhtml_function_coverage=1 00:05:45.333 --rc genhtml_legend=1 00:05:45.333 --rc geninfo_all_blocks=1 00:05:45.333 --rc geninfo_unexecuted_blocks=1 00:05:45.333 00:05:45.333 ' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.333 --rc genhtml_branch_coverage=1 00:05:45.333 --rc genhtml_function_coverage=1 00:05:45.333 --rc genhtml_legend=1 00:05:45.333 --rc geninfo_all_blocks=1 00:05:45.333 --rc geninfo_unexecuted_blocks=1 00:05:45.333 00:05:45.333 ' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.333 --rc genhtml_branch_coverage=1 00:05:45.333 --rc genhtml_function_coverage=1 00:05:45.333 --rc genhtml_legend=1 00:05:45.333 --rc geninfo_all_blocks=1 00:05:45.333 --rc geninfo_unexecuted_blocks=1 00:05:45.333 00:05:45.333 ' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.333 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.334 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:53.474 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.474 
18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:53.474 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:53.474 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:53.475 Found net devices under 0000:31:00.0: cvl_0_0 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:53.475 Found net devices under 0000:31:00.1: cvl_0_1 
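
[Annotation] The trace above is common.sh's gather_supported_nvmf_pci_devs walking /sys/bus/pci/devices, matching the Intel E810 device IDs seen here (0x1592, 0x159b under vendor 0x8086) and collecting the net interface bound to each function (cvl_0_0 and cvl_0_1). A minimal standalone sketch of that sysfs walk, not the common.sh implementation itself:

    # Sketch only: enumerate E810 functions and their netdev names the same
    # way the trace does; vendor/device IDs are the ones visible in the log.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == 0x8086 ]] && [[ $device == 0x1592 || $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "  net device: ${net##*/}"
            done
        fi
    done
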
00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:53.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:05:53.475 00:05:53.475 --- 10.0.0.2 ping statistics --- 00:05:53.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.475 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:05:53.475 00:05:53.475 --- 10.0.0.1 ping statistics --- 00:05:53.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.475 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:53.475 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1020911 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1020911 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1020911 ']' 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.475 18:21:47 
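
[Annotation] The nvmf_tcp_init sequence traced above builds the loopback test topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator interface, and one ping in each direction proves the path. Condensed into a plain replay (commands and names taken from the trace; run as root), with the trailing modprobe nvme-tcp loading the kernel initiator modules the suite needs later:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow NVMe/TCP 4420'   # simplified comment; the
                                                                # SPDK_NVMF tag is what teardown greps for
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
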
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.475 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.475 [2024-10-08 18:21:47.077994] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:05:53.475 [2024-10-08 18:21:47.078060] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.475 [2024-10-08 18:21:47.164747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.475 [2024-10-08 18:21:47.258901] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.475 [2024-10-08 18:21:47.258963] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.475 [2024-10-08 18:21:47.258971] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.475 [2024-10-08 18:21:47.258988] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.475 [2024-10-08 18:21:47.258995] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:53.475 [2024-10-08 18:21:47.260279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.475 [2024-10-08 18:21:47.260283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 [2024-10-08 18:21:47.946365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:54.048 18:21:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 [2024-10-08 18:21:47.970645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 NULL1 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 Delay0 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.048 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.048 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1020965 00:05:54.048 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:54.048 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:54.048 [2024-10-08 18:21:48.087616] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
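
[Annotation] With nvmf_tgt launched inside the namespace (pid 1020911) and waitforlisten done, delete_subsystem.sh provisions the target over JSON-RPC and starts the workload. The rpc_cmd arguments below are verbatim from the trace; writing them against scripts/rpc.py on the default /var/tmp/spdk.sock socket is an assumption about how rpc_cmd resolves in this suite:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                # 1020911 in this run

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator-side load, as launched at delete_subsystem.sh line 26:
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The 1000000 us settings on the delay bdev give every I/O roughly one second of artificial latency, which guarantees a full queue of in-flight commands when the subsystem is deleted, and is what the ~1 s averages in the latency tables below reflect.
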
00:05:55.961 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:56.222 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.222 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 [2024-10-08 18:21:50.320293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c851b0 is same with the state(6) to be set 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write 
completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, 
sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 starting I/O failed: -6 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 [2024-10-08 18:21:50.321316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ecc00d480 is same with the state(6) to be set 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Read completed with error (sct=0, sc=8) 00:05:56.483 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed 
with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Read completed with error (sct=0, sc=8) 00:05:56.484 Write completed with error (sct=0, sc=8) 00:05:57.424 [2024-10-08 18:21:51.271012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c866b0 is same with the state(6) to be set 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 [2024-10-08 18:21:51.324697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85390 is same with the state(6) to be set 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error 
(sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 [2024-10-08 18:21:51.324932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ecc00d7b0 is same with the state(6) to be set 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 [2024-10-08 18:21:51.325037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84fd0 is same with the state(6) to be set 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Write completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, 
sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.424 Read completed with error (sct=0, sc=8) 00:05:57.425 Write completed with error (sct=0, sc=8) 00:05:57.425 Write completed with error (sct=0, sc=8) 00:05:57.425 Read completed with error (sct=0, sc=8) 00:05:57.425 Read completed with error (sct=0, sc=8) 00:05:57.425 [2024-10-08 18:21:51.325427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ecc00cff0 is same with the state(6) to be set 00:05:57.425 Initializing NVMe Controllers 00:05:57.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:57.425 Controller IO queue size 128, less than required. 00:05:57.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:57.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:57.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:57.425 Initialization complete. Launching workers. 00:05:57.425 ======================================================== 00:05:57.425 Latency(us) 00:05:57.425 Device Information : IOPS MiB/s Average min max 00:05:57.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.76 0.08 906466.73 367.51 2003394.47 00:05:57.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.34 0.08 978162.89 260.39 2003037.60 00:05:57.425 ======================================================== 00:05:57.425 Total : 328.11 0.16 941067.45 260.39 2003394.47 00:05:57.425 00:05:57.425 [2024-10-08 18:21:51.325991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c866b0 (9): Bad file descriptor 00:05:57.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:57.425 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.425 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:57.425 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1020965 00:05:57.425 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1020965 00:05:57.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1020965) - No such process 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1020965 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1020965 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:05:57.996 
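
[Annotation] The storm of completion errors above is the point of the test: nvmf_delete_subsystem (delete_subsystem.sh line 32) is issued while spdk_nvme_perf still has I/O in flight, so every outstanding command completes with an error (sct=0, sc=8), queued submissions fail with -6, and perf (pid 1020965) exits with errors, as its summary table records. The script then polls until the process is really gone. The loop traced at lines 34-38 is roughly the sketch below (the second pass later reuses it at lines 56-60 with a bound of 20); NOT appears to be the suite's assert-nonzero-exit helper, judging by the es=1 handling in the trace:

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-workload
    delay=0
    while kill -0 "$perf_pid"; do    # probe; prints "No such process" once perf is gone
        (( delay++ > 30 )) && break  # ~15 s budget at 0.5 s per poll
        sleep 0.5
    done
    NOT wait "$perf_pid"             # perf must have exited nonzero for the test to pass
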
18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1020965 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.996 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 [2024-10-08 18:21:51.856396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1021804 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:05:57.997 18:21:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.997 [2024-10-08 18:21:51.941262] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:05:58.567 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.567 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:05:58.567 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.137 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.137 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:05:59.137 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.397 18:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.397 18:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:05:59.397 18:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.998 18:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.998 18:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:05:59.998 18:21:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:00.568 18:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:00.569 18:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:06:00.569 18:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:01.140 18:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.140 18:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:06:01.140 18:21:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:01.140 Initializing NVMe Controllers 00:06:01.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:01.140 Controller IO queue size 128, less than required. 00:06:01.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:01.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:01.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:01.140 Initialization complete. Launching workers. 
00:06:01.140 ======================================================== 00:06:01.140 Latency(us) 00:06:01.140 Device Information : IOPS MiB/s Average min max 00:06:01.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001555.74 1000094.57 1004189.49 00:06:01.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002513.04 1000196.58 1006899.33 00:06:01.140 ======================================================== 00:06:01.140 Total : 256.00 0.12 1002034.39 1000094.57 1006899.33 00:06:01.140 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1021804 00:06:01.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1021804) - No such process 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1021804 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:01.400 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:01.400 rmmod nvme_tcp 00:06:01.400 rmmod nvme_fabrics 00:06:01.400 rmmod nvme_keyring 00:06:01.660 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:01.660 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:01.660 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:01.660 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1020911 ']' 00:06:01.660 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1020911 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1020911 ']' 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1020911 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1020911 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1020911' 00:06:01.661 killing process with pid 1020911 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1020911 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1020911 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.661 18:21:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:04.205 00:06:04.205 real 0m18.621s 00:06:04.205 user 0m31.056s 00:06:04.205 sys 0m7.039s 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 ************************************ 00:06:04.205 END TEST nvmf_delete_subsystem 00:06:04.205 ************************************ 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 ************************************ 00:06:04.205 START TEST nvmf_host_management 00:06:04.205 ************************************ 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:04.205 * Looking for test storage... 
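
[Annotation] Before host_management's storage probe continues below, it is worth pinning down the nvmftestfini teardown just traced, condensed here with names from the log; the body of _remove_spdk_ns is not traced (xtrace is disabled around it), so the netns delete is an assumption:

    sync
    modprobe -v -r nvme-tcp nvme-fabrics                  # rmmod lines above also show nvme_keyring going
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop nvmf_tgt (1020911 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: keep everything but SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
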
00:06:04.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.205 18:21:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.205 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.206 --rc genhtml_branch_coverage=1 00:06:04.206 --rc genhtml_function_coverage=1 00:06:04.206 --rc genhtml_legend=1 00:06:04.206 --rc geninfo_all_blocks=1 00:06:04.206 --rc geninfo_unexecuted_blocks=1 00:06:04.206 00:06:04.206 ' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.206 --rc genhtml_branch_coverage=1 00:06:04.206 --rc genhtml_function_coverage=1 00:06:04.206 --rc genhtml_legend=1 00:06:04.206 --rc geninfo_all_blocks=1 00:06:04.206 --rc geninfo_unexecuted_blocks=1 00:06:04.206 00:06:04.206 ' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.206 --rc genhtml_branch_coverage=1 00:06:04.206 --rc genhtml_function_coverage=1 00:06:04.206 --rc genhtml_legend=1 00:06:04.206 --rc geninfo_all_blocks=1 00:06:04.206 --rc geninfo_unexecuted_blocks=1 00:06:04.206 00:06:04.206 ' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.206 --rc genhtml_branch_coverage=1 00:06:04.206 --rc genhtml_function_coverage=1 00:06:04.206 --rc genhtml_legend=1 00:06:04.206 --rc geninfo_all_blocks=1 00:06:04.206 --rc geninfo_unexecuted_blocks=1 00:06:04.206 00:06:04.206 ' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:04.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:04.206 18:21:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:12.352 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.352 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:12.353 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:12.353 Found net devices under 0000:31:00.0: cvl_0_0 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.353 18:22:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:12.353 Found net devices under 0000:31:00.1: cvl_0_1 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:12.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:06:12.353 00:06:12.353 --- 10.0.0.2 ping statistics --- 00:06:12.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.353 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:12.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:06:12.353 00:06:12.353 --- 10.0.0.1 ping statistics --- 00:06:12.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.353 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1026901 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1026901 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:12.353 18:22:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1026901 ']' 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.353 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.353 [2024-10-08 18:22:05.823321] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:06:12.353 [2024-10-08 18:22:05.823382] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.353 [2024-10-08 18:22:05.917438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.353 [2024-10-08 18:22:06.013092] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.353 [2024-10-08 18:22:06.013152] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.353 [2024-10-08 18:22:06.013161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.353 [2024-10-08 18:22:06.013168] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.353 [2024-10-08 18:22:06.013174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
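Aside: the nvmftestinit plumbing traced above reduces to a short sequence of ip(8) and iptables(8) calls. A minimal sketch, assuming two ice-driver ports already renamed to cvl_0_0/cvl_0_1 by the harness; the interface names, addresses and nvmf_tgt flags are taken from the trace, while the condensed form is illustrative rather than the nvmf/common.sh source:

  # target port moves into its own namespace; initiator port stays in the root ns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port toward the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability both ways, then launch the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

The split topology is the point: the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while bdevperf connects from 10.0.0.1 in the root namespace, so NVMe/TCP traffic really crosses the two E810 ports instead of looping back in software.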
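Similarly, the lt/cmp_versions walk at the top of this excerpt (the lcov 1.15-vs-2 check that picks the coverage flag spellings) is a plain field-wise dotted-version compare. A condensed re-implementation under the same assumptions, numeric fields only, illustrative rather than the scripts/common.sh source:

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"   # split each version on . - :
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          # unset fields evaluate to 0, so 1.15 vs 2 compares (1,15) vs (2,0)
          ((ver1[v] > ver2[v])) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
          ((ver1[v] < ver2[v])) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
      done
      [[ $2 == *'='* ]]        # all fields equal: only <=, >=, == succeed
  }
  cmp_versions 1.15 '<' 2 && echo "old lcov: use the --rc lcov_* flag spellings"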
00:06:12.353 [2024-10-08 18:22:06.015105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.353 [2024-10-08 18:22:06.015247] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.353 [2024-10-08 18:22:06.015386] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.353 [2024-10-08 18:22:06.015387] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.616 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.616 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:12.616 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:12.616 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.616 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 [2024-10-08 18:22:06.704136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 Malloc0 00:06:12.877 [2024-10-08 18:22:06.773612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1027111 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1027111 /var/tmp/bdevperf.sock 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1027111 ']' 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:12.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:12.877 { 00:06:12.877 "params": { 00:06:12.877 "name": "Nvme$subsystem", 00:06:12.877 "trtype": "$TEST_TRANSPORT", 00:06:12.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:12.877 "adrfam": "ipv4", 00:06:12.877 "trsvcid": "$NVMF_PORT", 00:06:12.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:12.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:12.877 "hdgst": ${hdgst:-false}, 00:06:12.877 "ddgst": ${ddgst:-false} 00:06:12.877 }, 00:06:12.877 "method": "bdev_nvme_attach_controller" 00:06:12.877 } 00:06:12.877 EOF 00:06:12.877 )") 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:12.877 18:22:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:12.877 "params": { 00:06:12.877 "name": "Nvme0", 00:06:12.877 "trtype": "tcp", 00:06:12.877 "traddr": "10.0.0.2", 00:06:12.877 "adrfam": "ipv4", 00:06:12.877 "trsvcid": "4420", 00:06:12.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:12.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:12.877 "hdgst": false, 00:06:12.877 "ddgst": false 00:06:12.877 }, 00:06:12.878 "method": "bdev_nvme_attach_controller" 00:06:12.878 }' 00:06:12.878 [2024-10-08 18:22:06.882573] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
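A note on the bdevperf invocation above: --json /dev/fd/63 is bash process substitution feeding the gen_nvmf_target_json output straight in. Written long-hand it is roughly the config below; the outer "subsystems" wrapper is the standard SPDK JSON-config shape and is an assumption here (the trace prints only the inner entry), and the temp-file path is illustrative:

  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
      -q 64 -o 65536 -w verify -t 10

The -q 64 -o 65536 -w verify -t 10 flags give a queue-depth-64, 64 KiB verify workload for 10 seconds against the attached Nvme0n1 bdev; the read_io_count polling below waits until I/O is actually flowing before the test yanks the host away with nvmf_subsystem_remove_host to exercise the error path.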
00:06:12.878 [2024-10-08 18:22:06.882639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027111 ] 00:06:13.138 [2024-10-08 18:22:06.966903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.138 [2024-10-08 18:22:07.063780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.400 Running I/O for 10 seconds... 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=522 00:06:13.974 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 522 -ge 100 ']' 00:06:13.975 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:13.975 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:13.975 18:22:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:13.975 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:13.975 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.975 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.975 [2024-10-08 18:22:07.805274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.805652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d6b0 is same with the state(6) to be set 00:06:13.975 [2024-10-08 18:22:07.809988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.975 [2024-10-08 18:22:07.810423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:13.975 [2024-10-08 18:22:07.810434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:13.976 [2024-10-08 18:22:07.810441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:13.976 [... the same pair of records repeats for every I/O still outstanding on qid:1 when the queue is torn down: READ commands (cids 44-63 and 0-6, lba 79488-82688) and WRITE commands (cids 7-23, lba 82816-84864), each len:128 and each completed with ABORTED - SQ DELETION (00/08) ...]
00:06:13.976 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:13.976 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:13.977 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:13.977 [2024-10-08 18:22:07.811314] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20aff60 was disconnected and freed. reset controller.
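The burst of ABORTED - SQ DELETION completions above is the expected fallout of the controller reset that follows: when submission queue 1 is deleted, every command still queued on it is failed back with that status. The "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe status code type and status code; a minimal sketch of decoding such a pair (status values from the NVMe base spec; the helper name is illustrative and not part of the SPDK tree):

decode_nvme_status() {
  # Decode the "(SCT/SC)" hex pair printed by spdk_nvme_print_completion.
  local sct=$1 sc=$2
  case "$sct" in
    00) # generic command status
      case "$sc" in
        00) echo "SUCCESS" ;;
        08) echo "ABORTED - SQ DELETION" ;;  # command aborted because its SQ was deleted
        *)  echo "generic status, sc=0x$sc" ;;
      esac ;;
    01) echo "command specific status, sc=0x$sc" ;;  # e.g. the (01/84) CONNECT failure below
    *)  echo "sct=0x$sct sc=0x$sc" ;;
  esac
}
decode_nvme_status 00 08   # -> ABORTED - SQ DELETION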
00:06:13.977 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:13.977 [2024-10-08 18:22:07.812554] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:13.977 task offset: 76800 on job bdev=Nvme0n1 fails
00:06:13.977
00:06:13.977                                                   Latency(us)
[2024-10-08T16:22:08.034Z] Device Information          : runtime(s)      IOPS     MiB/s    Fail/s     TO/s   Average       min       max
00:06:13.977 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:13.977 Job: Nvme0n1 ended in about 0.42 seconds with error
00:06:13.977 Verification LBA range: start 0x0 length 0x400
00:06:13.977 Nvme0n1                     :       0.42   1431.51     89.47    152.69      0.00  39146.73   1733.97  37573.97
00:06:13.977
[2024-10-08T16:22:08.034Z] ===================================================================================================================
[2024-10-08T16:22:08.034Z] Total                       :            1431.51     89.47    152.69      0.00  39146.73   1733.97  37573.97
00:06:13.977 [2024-10-08 18:22:07.814786] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:13.977 [2024-10-08 18:22:07.814826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e97100 (9): Bad file descriptor
00:06:13.977 [2024-10-08 18:22:07.816791] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:06:13.977 [2024-10-08 18:22:07.816914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:06:13.977 [2024-10-08 18:22:07.816944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:13.977 [2024-10-08 18:22:07.816961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:06:13.977 [2024-10-08 18:22:07.816969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:06:13.977 [2024-10-08 18:22:07.816984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:06:13.977 [2024-10-08 18:22:07.816992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e97100
00:06:13.977 [2024-10-08 18:22:07.817014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e97100 (9): Bad file descriptor
00:06:13.977 [2024-10-08 18:22:07.817028] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:06:13.977 [2024-10-08 18:22:07.817035] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:06:13.977 [2024-10-08 18:22:07.817045] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:06:13.977 [2024-10-08 18:22:07.817060] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
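The reconnect attempt above fails by design: bdevperf connected before the host NQN was whitelisted, so once the controller resets, the target's nvmf_qpair_access_allowed check rejects the new FABRIC CONNECT (sct 1, sc 132). host_management.sh@85 then allows the host via the rpc_cmd traced earlier; stripped of the wrapper, that is a single RPC against the target's control socket, roughly (socket path is SPDK's default):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0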
00:06:13.977 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.977 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:14.917 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1027111 00:06:14.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1027111) - No such process 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:14.918 { 00:06:14.918 "params": { 00:06:14.918 "name": "Nvme$subsystem", 00:06:14.918 "trtype": "$TEST_TRANSPORT", 00:06:14.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:14.918 "adrfam": "ipv4", 00:06:14.918 "trsvcid": "$NVMF_PORT", 00:06:14.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:14.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:14.918 "hdgst": ${hdgst:-false}, 00:06:14.918 "ddgst": ${ddgst:-false} 00:06:14.918 }, 00:06:14.918 "method": "bdev_nvme_attach_controller" 00:06:14.918 } 00:06:14.918 EOF 00:06:14.918 )") 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:14.918 18:22:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:14.918 "params": { 00:06:14.918 "name": "Nvme0", 00:06:14.918 "trtype": "tcp", 00:06:14.918 "traddr": "10.0.0.2", 00:06:14.918 "adrfam": "ipv4", 00:06:14.918 "trsvcid": "4420", 00:06:14.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:14.918 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:14.918 "hdgst": false, 00:06:14.918 "ddgst": false 00:06:14.918 }, 00:06:14.918 "method": "bdev_nvme_attach_controller" 00:06:14.918 }' 00:06:14.918 [2024-10-08 18:22:08.882526] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:06:14.918 [2024-10-08 18:22:08.882579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027469 ]
00:06:14.918 [2024-10-08 18:22:08.962968] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.179 [2024-10-08 18:22:09.027787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.440 Running I/O for 1 seconds...
00:06:16.381 1807.00 IOPS, 112.94 MiB/s
00:06:16.381
00:06:16.381                                                   Latency(us)
[2024-10-08T16:22:10.438Z] Device Information          : runtime(s)      IOPS     MiB/s    Fail/s     TO/s   Average       min       max
00:06:16.381 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:16.381 Verification LBA range: start 0x0 length 0x400
00:06:16.381 Nvme0n1                     :       1.02   1833.05    114.57      0.00      0.00  34218.64   3249.49  34734.08
[2024-10-08T16:22:10.438Z] ===================================================================================================================
[2024-10-08T16:22:10.438Z] Total                       :            1833.05    114.57      0.00      0.00  34218.64   3249.49  34734.08
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:16.642 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1026901 ']'
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1026901
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1026901 ']'
00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1026901
00:06:16.642 18:22:10
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1026901 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1026901' 00:06:16.642 killing process with pid 1026901 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1026901 00:06:16.642 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1026901 00:06:16.904 [2024-10-08 18:22:10.715276] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.904 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:18.822 00:06:18.822 real 0m14.992s 00:06:18.822 user 0m23.698s 00:06:18.822 sys 0m6.968s 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:18.822 ************************************ 00:06:18.822 END TEST nvmf_host_management 00:06:18.822 ************************************ 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
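run_test above is the harness that brackets every suite with the START TEST / END TEST banners and the wall-clock summary seen throughout this log. A minimal sketch of what such a wrapper does (illustrative only; the real helper in test/common/autotest_common.sh also manages xtrace and exit-code bookkeeping):

run_test_sketch() {
  # Bracket a test invocation with banners and time it, like run_test does.
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"     # run the test script with its arguments; prints real/user/sys at the end
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
run_test_sketch nvmf_lvol ./test/nvmf/target/nvmf_lvol.sh --transport=tcp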
00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.822 18:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.083 ************************************ 00:06:19.083 START TEST nvmf_lvol 00:06:19.083 ************************************ 00:06:19.083 18:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:19.083 * Looking for test storage... 00:06:19.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.083 --rc genhtml_branch_coverage=1 00:06:19.083 --rc genhtml_function_coverage=1 00:06:19.083 --rc genhtml_legend=1 00:06:19.083 --rc geninfo_all_blocks=1 00:06:19.083 --rc geninfo_unexecuted_blocks=1 00:06:19.083 00:06:19.083 ' 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.083 --rc genhtml_branch_coverage=1 00:06:19.083 --rc genhtml_function_coverage=1 00:06:19.083 --rc genhtml_legend=1 00:06:19.083 --rc geninfo_all_blocks=1 00:06:19.083 --rc geninfo_unexecuted_blocks=1 00:06:19.083 00:06:19.083 ' 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.083 --rc genhtml_branch_coverage=1 00:06:19.083 --rc genhtml_function_coverage=1 00:06:19.083 --rc genhtml_legend=1 00:06:19.083 --rc geninfo_all_blocks=1 00:06:19.083 --rc geninfo_unexecuted_blocks=1 00:06:19.083 00:06:19.083 ' 00:06:19.083 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.083 --rc genhtml_branch_coverage=1 00:06:19.083 --rc genhtml_function_coverage=1 00:06:19.083 --rc genhtml_legend=1 00:06:19.083 --rc geninfo_all_blocks=1 00:06:19.084 --rc geninfo_unexecuted_blocks=1 00:06:19.084 00:06:19.084 ' 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
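The lcov probe traced just above exercises the version helpers from scripts/common.sh: "lt 1.15 2" splits each version string on ".", "-" and ":" (the IFS=.-: reads in the trace) and compares the components numerically, left to right. A condensed sketch of that comparison logic, under the assumption that all components are numeric:

lt_sketch() {
  # Return 0 (true) if version $1 sorts strictly before version $2.
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
  done
  return 1   # equal versions are not less-than
}
lt_sketch 1.15 2 && echo "installed lcov predates 2.x"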
00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.084 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.345 18:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:27.689 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:27.690 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:27.690 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.690 18:22:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:27.690 Found net devices under 0000:31:00.0: cvl_0_0 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:27.690 Found net devices under 0000:31:00.1: cvl_0_1 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:27.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:06:27.690 00:06:27.690 --- 10.0.0.2 ping statistics --- 00:06:27.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.690 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:27.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:06:27.690 00:06:27.690 --- 10.0.0.1 ping statistics --- 00:06:27.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.690 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:27.690 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1032239 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1032239 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1032239 ']' 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.691 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:27.691 [2024-10-08 18:22:20.852465] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
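nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the application answers on its RPC socket. Stripped of the trace plumbing, the sequence is roughly the following (polling loop is a sketch; /var/tmp/spdk.sock is SPDK's default socket path):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# Poll the RPC socket until the target is ready to accept rpc.py calls.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
  sleep 0.5
done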
00:06:27.691 [2024-10-08 18:22:20.852530] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.691 [2024-10-08 18:22:20.941310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.691 [2024-10-08 18:22:21.036751] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.691 [2024-10-08 18:22:21.036815] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.691 [2024-10-08 18:22:21.036826] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.691 [2024-10-08 18:22:21.036833] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.691 [2024-10-08 18:22:21.036840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.691 [2024-10-08 18:22:21.038198] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.691 [2024-10-08 18:22:21.038431] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.691 [2024-10-08 18:22:21.038434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.691 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:27.952 [2024-10-08 18:22:21.874517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.952 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:28.212 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:28.212 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:28.473 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:28.473 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:28.734 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:28.734 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bc1edc11-8449-4ce4-a58d-2511ccad61b5 00:06:28.734 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc1edc11-8449-4ce4-a58d-2511ccad61b5 lvol 20
00:06:28.994 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b83c09ae-f6c4-4918-aaae-28ff94e409e9
00:06:28.994 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:06:29.254 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b83c09ae-f6c4-4918-aaae-28ff94e409e9
00:06:29.254 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:06:29.515 [2024-10-08 18:22:23.441304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:29.515 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:29.775 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1032939
00:06:29.775 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:06:29.775 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:06:30.714 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b83c09ae-f6c4-4918-aaae-28ff94e409e9 MY_SNAPSHOT
00:06:30.975 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f536301c-0575-4f9a-bd46-858335acada3
00:06:30.975 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b83c09ae-f6c4-4918-aaae-28ff94e409e9 30
00:06:31.235 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f536301c-0575-4f9a-bd46-858335acada3 MY_CLONE
00:06:31.495 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=afdbb8c2-ff3b-4989-9501-6d31c643e33b
00:06:31.495 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate afdbb8c2-ff3b-4989-9501-6d31c643e33b
00:06:31.755 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1032939
00:06:39.892 Initializing NVMe Controllers
00:06:39.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:06:39.892 Controller IO queue size 128, less than required.
00:06:39.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:39.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:39.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:39.893 Initialization complete. Launching workers.
00:06:39.893 ========================================================
00:06:39.893 Latency(us)
00:06:39.893 Device Information : IOPS MiB/s Average min max
00:06:39.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15948.10 62.30 8026.25 2141.29 52587.64
00:06:39.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17165.20 67.05 7458.42 791.42 59089.89
00:06:39.893 ========================================================
00:06:39.893 Total : 33113.30 129.35 7731.90 791.42 59089.89
00:06:39.893
00:06:39.893 18:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:40.153 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b83c09ae-f6c4-4918-aaae-28ff94e409e9
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc1edc11-8449-4ce4-a58d-2511ccad61b5
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:40.414 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:40.414 rmmod nvme_tcp
00:06:40.674 rmmod nvme_fabrics
00:06:40.674 rmmod nvme_keyring
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1032239 ']'
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1032239
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1032239 ']'
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1032239
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1032239
00:06:40.674 18:22:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1032239' 00:06:40.674 killing process with pid 1032239 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1032239 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1032239 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:40.674 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.675 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.220 00:06:43.220 real 0m23.895s 00:06:43.220 user 1m4.287s 00:06:43.220 sys 0m8.491s 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 ************************************ 00:06:43.220 END TEST nvmf_lvol 00:06:43.220 ************************************ 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 ************************************ 00:06:43.220 START TEST nvmf_lvs_grow 00:06:43.220 ************************************ 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:43.220 * Looking for test storage... 
00:06:43.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.220 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.220 --rc genhtml_branch_coverage=1 00:06:43.220 --rc genhtml_function_coverage=1 00:06:43.220 --rc genhtml_legend=1 00:06:43.220 --rc geninfo_all_blocks=1 00:06:43.220 --rc geninfo_unexecuted_blocks=1 00:06:43.220 00:06:43.220 ' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.220 --rc genhtml_branch_coverage=1 00:06:43.220 --rc genhtml_function_coverage=1 00:06:43.220 --rc genhtml_legend=1 00:06:43.220 --rc geninfo_all_blocks=1 00:06:43.220 --rc geninfo_unexecuted_blocks=1 00:06:43.220 00:06:43.220 ' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.220 --rc genhtml_branch_coverage=1 00:06:43.220 --rc genhtml_function_coverage=1 00:06:43.220 --rc genhtml_legend=1 00:06:43.220 --rc geninfo_all_blocks=1 00:06:43.220 --rc geninfo_unexecuted_blocks=1 00:06:43.220 00:06:43.220 ' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.220 --rc genhtml_branch_coverage=1 00:06:43.220 --rc genhtml_function_coverage=1 00:06:43.220 --rc genhtml_legend=1 00:06:43.220 --rc geninfo_all_blocks=1 00:06:43.220 --rc geninfo_unexecuted_blocks=1 00:06:43.220 00:06:43.220 ' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:43.220 18:22:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.220 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.221 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.361 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:51.362 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:51.362 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.362 18:22:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:51.362 Found net devices under 0000:31:00.0: cvl_0_0 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:51.362 Found net devices under 0000:31:00.1: cvl_0_1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:06:51.362 00:06:51.362 --- 10.0.0.2 ping statistics --- 00:06:51.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.362 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:06:51.362 00:06:51.362 --- 10.0.0.1 ping statistics --- 00:06:51.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.362 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1039373 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1039373 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1039373 ']' 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.362 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 [2024-10-08 18:22:44.881356] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:06:51.362 [2024-10-08 18:22:44.881423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.363 [2024-10-08 18:22:44.968210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.363 [2024-10-08 18:22:45.061202] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.363 [2024-10-08 18:22:45.061262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.363 [2024-10-08 18:22:45.061271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.363 [2024-10-08 18:22:45.061279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.363 [2024-10-08 18:22:45.061285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.363 [2024-10-08 18:22:45.062124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.936 [2024-10-08 18:22:45.911822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.936 ************************************ 00:06:51.936 START TEST lvs_grow_clean 00:06:51.936 ************************************ 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:51.936 18:22:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.936 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:52.198 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:52.198 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:52.459 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:06:52.459 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:06:52.459 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:52.720 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:52.720 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:52.720 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 lvol 150 00:06:52.720 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9f57a043-7ada-482a-a898-2e1c79c6ac5a 00:06:52.720 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.720 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:52.981 [2024-10-08 18:22:46.896646] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:52.981 [2024-10-08 18:22:46.896722] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:52.981 true 00:06:52.981 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:06:52.981 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:53.242 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:53.242 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.242 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9f57a043-7ada-482a-a898-2e1c79c6ac5a 00:06:53.503 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.764 [2024-10-08 18:22:47.610949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1040085 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1040085 /var/tmp/bdevperf.sock 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1040085 ']' 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:53.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.764 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:54.026 [2024-10-08 18:22:47.851134] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:06:54.026 [2024-10-08 18:22:47.851202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040085 ] 00:06:54.026 [2024-10-08 18:22:47.934224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.026 [2024-10-08 18:22:48.028242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.968 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.968 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:54.968 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:55.228 Nvme0n1 00:06:55.228 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:55.228 [ 00:06:55.228 { 00:06:55.228 "name": "Nvme0n1", 00:06:55.228 "aliases": [ 00:06:55.228 "9f57a043-7ada-482a-a898-2e1c79c6ac5a" 00:06:55.228 ], 00:06:55.228 "product_name": "NVMe disk", 00:06:55.228 "block_size": 4096, 00:06:55.228 "num_blocks": 38912, 00:06:55.228 "uuid": "9f57a043-7ada-482a-a898-2e1c79c6ac5a", 00:06:55.228 "numa_id": 0, 00:06:55.228 "assigned_rate_limits": { 00:06:55.228 "rw_ios_per_sec": 0, 00:06:55.228 "rw_mbytes_per_sec": 0, 00:06:55.228 "r_mbytes_per_sec": 0, 00:06:55.228 "w_mbytes_per_sec": 0 00:06:55.228 }, 00:06:55.228 "claimed": false, 00:06:55.228 "zoned": false, 00:06:55.228 "supported_io_types": { 00:06:55.228 "read": true, 00:06:55.228 "write": true, 00:06:55.228 "unmap": true, 00:06:55.228 "flush": true, 00:06:55.228 "reset": true, 00:06:55.228 "nvme_admin": true, 00:06:55.228 "nvme_io": true, 00:06:55.228 "nvme_io_md": false, 00:06:55.228 "write_zeroes": true, 00:06:55.228 "zcopy": false, 00:06:55.228 "get_zone_info": false, 00:06:55.228 "zone_management": false, 00:06:55.228 "zone_append": false, 00:06:55.228 "compare": true, 00:06:55.228 "compare_and_write": true, 00:06:55.228 "abort": true, 00:06:55.228 "seek_hole": false, 00:06:55.228 "seek_data": false, 00:06:55.228 "copy": true, 00:06:55.228 "nvme_iov_md": false 00:06:55.228 }, 00:06:55.228 "memory_domains": [ 00:06:55.228 { 00:06:55.228 "dma_device_id": "system", 00:06:55.228 "dma_device_type": 1 00:06:55.228 } 00:06:55.228 ], 00:06:55.228 "driver_specific": { 00:06:55.228 "nvme": [ 00:06:55.228 { 00:06:55.228 "trid": { 00:06:55.228 "trtype": "TCP", 00:06:55.228 "adrfam": "IPv4", 00:06:55.228 "traddr": "10.0.0.2", 00:06:55.228 "trsvcid": "4420", 00:06:55.228 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:55.228 }, 00:06:55.228 "ctrlr_data": { 00:06:55.228 "cntlid": 1, 00:06:55.228 "vendor_id": "0x8086", 00:06:55.228 "model_number": "SPDK bdev Controller", 00:06:55.228 "serial_number": "SPDK0", 00:06:55.228 "firmware_revision": "25.01", 00:06:55.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.228 "oacs": { 00:06:55.228 "security": 0, 00:06:55.228 "format": 0, 00:06:55.228 "firmware": 0, 00:06:55.228 "ns_manage": 0 00:06:55.228 }, 00:06:55.228 "multi_ctrlr": true, 00:06:55.228 
"ana_reporting": false 00:06:55.228 }, 00:06:55.228 "vs": { 00:06:55.228 "nvme_version": "1.3" 00:06:55.228 }, 00:06:55.228 "ns_data": { 00:06:55.228 "id": 1, 00:06:55.228 "can_share": true 00:06:55.228 } 00:06:55.228 } 00:06:55.228 ], 00:06:55.228 "mp_policy": "active_passive" 00:06:55.228 } 00:06:55.228 } 00:06:55.228 ] 00:06:55.228 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1040377 00:06:55.228 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:55.228 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:55.488 Running I/O for 10 seconds... 00:06:56.427 Latency(us) 00:06:56.427 [2024-10-08T16:22:50.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.427 Nvme0n1 : 1.00 25184.00 98.38 0.00 0.00 0.00 0.00 0.00 00:06:56.427 [2024-10-08T16:22:50.484Z] =================================================================================================================== 00:06:56.427 [2024-10-08T16:22:50.484Z] Total : 25184.00 98.38 0.00 0.00 0.00 0.00 0.00 00:06:56.427 00:06:57.368 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:06:57.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.368 Nvme0n1 : 2.00 25311.00 98.87 0.00 0.00 0.00 0.00 0.00 00:06:57.368 [2024-10-08T16:22:51.425Z] =================================================================================================================== 00:06:57.368 [2024-10-08T16:22:51.425Z] Total : 25311.00 98.87 0.00 0.00 0.00 0.00 0.00 00:06:57.368 00:06:57.368 true 00:06:57.629 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:06:57.629 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:57.629 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:57.629 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:57.629 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1040377 00:06:58.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.568 Nvme0n1 : 3.00 25385.67 99.16 0.00 0.00 0.00 0.00 0.00 00:06:58.568 [2024-10-08T16:22:52.625Z] =================================================================================================================== 00:06:58.568 [2024-10-08T16:22:52.625Z] Total : 25385.67 99.16 0.00 0.00 0.00 0.00 0.00 00:06:58.568 00:06:59.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.508 Nvme0n1 : 4.00 25439.25 99.37 0.00 0.00 0.00 0.00 0.00 00:06:59.508 [2024-10-08T16:22:53.565Z] 
=================================================================================================================== 00:06:59.508 [2024-10-08T16:22:53.565Z] Total : 25439.25 99.37 0.00 0.00 0.00 0.00 0.00 00:06:59.508 00:07:00.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.449 Nvme0n1 : 5.00 25471.20 99.50 0.00 0.00 0.00 0.00 0.00 00:07:00.449 [2024-10-08T16:22:54.506Z] =================================================================================================================== 00:07:00.449 [2024-10-08T16:22:54.506Z] Total : 25471.20 99.50 0.00 0.00 0.00 0.00 0.00 00:07:00.449 00:07:01.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.388 Nvme0n1 : 6.00 25492.67 99.58 0.00 0.00 0.00 0.00 0.00 00:07:01.388 [2024-10-08T16:22:55.445Z] =================================================================================================================== 00:07:01.388 [2024-10-08T16:22:55.445Z] Total : 25492.67 99.58 0.00 0.00 0.00 0.00 0.00 00:07:01.388 00:07:02.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.329 Nvme0n1 : 7.00 25516.57 99.67 0.00 0.00 0.00 0.00 0.00 00:07:02.329 [2024-10-08T16:22:56.386Z] =================================================================================================================== 00:07:02.329 [2024-10-08T16:22:56.386Z] Total : 25516.57 99.67 0.00 0.00 0.00 0.00 0.00 00:07:02.329 00:07:03.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.713 Nvme0n1 : 8.00 25534.50 99.74 0.00 0.00 0.00 0.00 0.00 00:07:03.713 [2024-10-08T16:22:57.770Z] =================================================================================================================== 00:07:03.713 [2024-10-08T16:22:57.770Z] Total : 25534.50 99.74 0.00 0.00 0.00 0.00 0.00 00:07:03.713 00:07:04.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.654 Nvme0n1 : 9.00 25548.22 99.80 0.00 0.00 0.00 0.00 0.00 00:07:04.654 [2024-10-08T16:22:58.711Z] =================================================================================================================== 00:07:04.654 [2024-10-08T16:22:58.711Z] Total : 25548.22 99.80 0.00 0.00 0.00 0.00 0.00 00:07:04.654 00:07:05.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.596 Nvme0n1 : 10.00 25566.00 99.87 0.00 0.00 0.00 0.00 0.00 00:07:05.596 [2024-10-08T16:22:59.653Z] =================================================================================================================== 00:07:05.596 [2024-10-08T16:22:59.653Z] Total : 25566.00 99.87 0.00 0.00 0.00 0.00 0.00 00:07:05.596 00:07:05.596 00:07:05.596 Latency(us) 00:07:05.596 [2024-10-08T16:22:59.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.596 Nvme0n1 : 10.00 25562.17 99.85 0.00 0.00 5003.79 2498.56 10485.76 00:07:05.596 [2024-10-08T16:22:59.653Z] =================================================================================================================== 00:07:05.596 [2024-10-08T16:22:59.653Z] Total : 25562.17 99.85 0.00 0.00 5003.79 2498.56 10485.76 00:07:05.596 { 00:07:05.596 "results": [ 00:07:05.596 { 00:07:05.596 "job": "Nvme0n1", 00:07:05.596 "core_mask": "0x2", 00:07:05.596 "workload": "randwrite", 00:07:05.596 "status": "finished", 00:07:05.596 "queue_depth": 128, 00:07:05.596 "io_size": 4096, 00:07:05.596 
"runtime": 10.004043, 00:07:05.596 "iops": 25562.165216602927, 00:07:05.596 "mibps": 99.85220787735518, 00:07:05.596 "io_failed": 0, 00:07:05.596 "io_timeout": 0, 00:07:05.596 "avg_latency_us": 5003.788470062241, 00:07:05.596 "min_latency_us": 2498.56, 00:07:05.596 "max_latency_us": 10485.76 00:07:05.596 } 00:07:05.596 ], 00:07:05.596 "core_count": 1 00:07:05.596 } 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1040085 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1040085 ']' 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1040085 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040085 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040085' 00:07:05.596 killing process with pid 1040085 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1040085 00:07:05.596 Received shutdown signal, test time was about 10.000000 seconds 00:07:05.596 00:07:05.596 Latency(us) 00:07:05.596 [2024-10-08T16:22:59.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.596 [2024-10-08T16:22:59.653Z] =================================================================================================================== 00:07:05.596 [2024-10-08T16:22:59.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1040085 00:07:05.596 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.857 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.857 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:07:05.857 18:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:06.117 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:06.117 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:06.117 18:23:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:06.377 [2024-10-08 18:23:00.228468] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:07:06.377 request: 00:07:06.377 { 00:07:06.377 "uuid": "ae6a1a44-ddeb-4720-baba-6e2639bcd944", 00:07:06.377 "method": "bdev_lvol_get_lvstores", 00:07:06.377 "req_id": 1 00:07:06.377 } 00:07:06.377 Got JSON-RPC error response 00:07:06.377 response: 00:07:06.377 { 00:07:06.377 "code": -19, 00:07:06.377 "message": "No such device" 00:07:06.377 } 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.377 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.638 aio_bdev 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9f57a043-7ada-482a-a898-2e1c79c6ac5a 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9f57a043-7ada-482a-a898-2e1c79c6ac5a 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:06.638 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:06.899 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9f57a043-7ada-482a-a898-2e1c79c6ac5a -t 2000 00:07:06.899 [ 00:07:06.899 { 00:07:06.899 "name": "9f57a043-7ada-482a-a898-2e1c79c6ac5a", 00:07:06.899 "aliases": [ 00:07:06.899 "lvs/lvol" 00:07:06.899 ], 00:07:06.899 "product_name": "Logical Volume", 00:07:06.899 "block_size": 4096, 00:07:06.899 "num_blocks": 38912, 00:07:06.899 "uuid": "9f57a043-7ada-482a-a898-2e1c79c6ac5a", 00:07:06.899 "assigned_rate_limits": { 00:07:06.899 "rw_ios_per_sec": 0, 00:07:06.899 "rw_mbytes_per_sec": 0, 00:07:06.899 "r_mbytes_per_sec": 0, 00:07:06.899 "w_mbytes_per_sec": 0 00:07:06.899 }, 00:07:06.899 "claimed": false, 00:07:06.899 "zoned": false, 00:07:06.899 "supported_io_types": { 00:07:06.899 "read": true, 00:07:06.899 "write": true, 00:07:06.899 "unmap": true, 00:07:06.899 "flush": false, 00:07:06.899 "reset": true, 00:07:06.899 "nvme_admin": false, 00:07:06.899 "nvme_io": false, 00:07:06.899 "nvme_io_md": false, 00:07:06.899 "write_zeroes": true, 00:07:06.899 "zcopy": false, 00:07:06.899 "get_zone_info": false, 00:07:06.899 "zone_management": false, 00:07:06.899 "zone_append": false, 00:07:06.899 "compare": false, 00:07:06.899 "compare_and_write": false, 00:07:06.899 "abort": false, 00:07:06.899 "seek_hole": true, 00:07:06.899 "seek_data": true, 00:07:06.899 "copy": false, 00:07:06.899 "nvme_iov_md": false 00:07:06.899 }, 00:07:06.899 "driver_specific": { 00:07:06.899 "lvol": { 00:07:06.899 "lvol_store_uuid": "ae6a1a44-ddeb-4720-baba-6e2639bcd944", 00:07:06.899 "base_bdev": "aio_bdev", 00:07:06.899 "thin_provision": false, 00:07:06.899 "num_allocated_clusters": 38, 00:07:06.899 "snapshot": false, 00:07:06.899 "clone": false, 00:07:06.899 "esnap_clone": false 00:07:06.899 } 00:07:06.899 } 00:07:06.899 } 00:07:06.899 ] 00:07:06.899 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:06.899 18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944 00:07:06.899 
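The jq probes that follow are the test's cluster-accounting check against the recreated store. As a hedged sketch of the idiom (same RPCs the trace shows; $SPDK stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the UUID is this run's lvstore):

    # Sketch only: verify lvstore accounting the way nvmf_lvs_grow.sh does.
    rpc=$SPDK/scripts/rpc.py
    uuid=ae6a1a44-ddeb-4720-baba-6e2639bcd944
    free_clusters=$($rpc bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    data_clusters=$($rpc bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    # The grown 400M store holds 99 clusters; the 150M lvol pins 38 of them
    # (num_allocated_clusters in the bdev dump above), leaving 61 free.
    (( free_clusters == 61 )) && (( data_clusters == 99 ))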
18:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:07.160 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:07.160 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6a1a44-ddeb-4720-baba-6e2639bcd944
00:07:07.160 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:07.420 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:07.420 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9f57a043-7ada-482a-a898-2e1c79c6ac5a
00:07:07.420 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae6a1a44-ddeb-4720-baba-6e2639bcd944
00:07:07.680 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:07.940
00:07:07.940 real 0m15.815s
00:07:07.940 user 0m15.556s
00:07:07.940 sys 0m1.387s
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:07:07.940 ************************************
00:07:07.940 END TEST lvs_grow_clean
00:07:07.940 ************************************
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:07.940 ************************************
00:07:07.940 START TEST lvs_grow_dirty
00:07:07.940 ************************************
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.940 18:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.200 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:08.201 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:08.201 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae748900-5563-4b96-9adf-49311b7f7321 00:07:08.201 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:08.201 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:08.461 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:08.461 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:08.461 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae748900-5563-4b96-9adf-49311b7f7321 lvol 150 00:07:08.721 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:08.721 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.721 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.721 [2024-10-08 18:23:02.748677] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.721 [2024-10-08 18:23:02.748719] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.721 true 00:07:08.721 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:08.721 18:23:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:08.981 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:08.981 18:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.241 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:09.241 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.501 [2024-10-08 18:23:03.406557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.501 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1043178 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1043178 /var/tmp/bdevperf.sock 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1043178 ']' 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.761 18:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.761 [2024-10-08 18:23:03.621964] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
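At this point the harness has launched bdevperf in wait-for-RPC mode (-z) on a Unix socket; the entries that follow attach an NVMe/TCP controller to it and drive the 10-second randwrite run over that socket. A minimal sketch of the flow, reusing the exact commands logged in this run ($SPDK as in the earlier sketch; the harness also waits for the socket via waitforlisten before issuing RPCs):

    # Sketch only: bdevperf as an NVMe/TCP initiator, driven over an RPC socket.
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    # Attach the target's namespace as bdev Nvme0n1, then run the workload.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"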
00:07:09.761 [2024-10-08 18:23:03.622031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043178 ] 00:07:09.761 [2024-10-08 18:23:03.704341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.761 [2024-10-08 18:23:03.758320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.699 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.699 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:10.699 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:10.699 Nvme0n1 00:07:10.699 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:10.959 [ 00:07:10.959 { 00:07:10.959 "name": "Nvme0n1", 00:07:10.959 "aliases": [ 00:07:10.959 "e9419c7c-50b4-4924-8a4e-ef176a4e67f0" 00:07:10.959 ], 00:07:10.959 "product_name": "NVMe disk", 00:07:10.959 "block_size": 4096, 00:07:10.959 "num_blocks": 38912, 00:07:10.959 "uuid": "e9419c7c-50b4-4924-8a4e-ef176a4e67f0", 00:07:10.959 "numa_id": 0, 00:07:10.959 "assigned_rate_limits": { 00:07:10.959 "rw_ios_per_sec": 0, 00:07:10.959 "rw_mbytes_per_sec": 0, 00:07:10.959 "r_mbytes_per_sec": 0, 00:07:10.959 "w_mbytes_per_sec": 0 00:07:10.959 }, 00:07:10.959 "claimed": false, 00:07:10.959 "zoned": false, 00:07:10.959 "supported_io_types": { 00:07:10.959 "read": true, 00:07:10.959 "write": true, 00:07:10.959 "unmap": true, 00:07:10.959 "flush": true, 00:07:10.959 "reset": true, 00:07:10.959 "nvme_admin": true, 00:07:10.959 "nvme_io": true, 00:07:10.959 "nvme_io_md": false, 00:07:10.959 "write_zeroes": true, 00:07:10.959 "zcopy": false, 00:07:10.959 "get_zone_info": false, 00:07:10.959 "zone_management": false, 00:07:10.959 "zone_append": false, 00:07:10.959 "compare": true, 00:07:10.959 "compare_and_write": true, 00:07:10.959 "abort": true, 00:07:10.959 "seek_hole": false, 00:07:10.959 "seek_data": false, 00:07:10.959 "copy": true, 00:07:10.959 "nvme_iov_md": false 00:07:10.959 }, 00:07:10.959 "memory_domains": [ 00:07:10.959 { 00:07:10.959 "dma_device_id": "system", 00:07:10.959 "dma_device_type": 1 00:07:10.959 } 00:07:10.959 ], 00:07:10.959 "driver_specific": { 00:07:10.959 "nvme": [ 00:07:10.959 { 00:07:10.959 "trid": { 00:07:10.959 "trtype": "TCP", 00:07:10.959 "adrfam": "IPv4", 00:07:10.959 "traddr": "10.0.0.2", 00:07:10.959 "trsvcid": "4420", 00:07:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:10.959 }, 00:07:10.959 "ctrlr_data": { 00:07:10.959 "cntlid": 1, 00:07:10.959 "vendor_id": "0x8086", 00:07:10.959 "model_number": "SPDK bdev Controller", 00:07:10.959 "serial_number": "SPDK0", 00:07:10.959 "firmware_revision": "25.01", 00:07:10.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.959 "oacs": { 00:07:10.959 "security": 0, 00:07:10.959 "format": 0, 00:07:10.959 "firmware": 0, 00:07:10.959 "ns_manage": 0 00:07:10.959 }, 00:07:10.959 "multi_ctrlr": true, 00:07:10.959 
"ana_reporting": false 00:07:10.959 }, 00:07:10.959 "vs": { 00:07:10.959 "nvme_version": "1.3" 00:07:10.959 }, 00:07:10.959 "ns_data": { 00:07:10.959 "id": 1, 00:07:10.959 "can_share": true 00:07:10.959 } 00:07:10.959 } 00:07:10.959 ], 00:07:10.959 "mp_policy": "active_passive" 00:07:10.959 } 00:07:10.959 } 00:07:10.959 ] 00:07:10.960 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1043518 00:07:10.960 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:10.960 18:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:10.960 Running I/O for 10 seconds... 00:07:11.899 Latency(us) 00:07:11.899 [2024-10-08T16:23:05.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.899 Nvme0n1 : 1.00 25109.00 98.08 0.00 0.00 0.00 0.00 0.00 00:07:11.899 [2024-10-08T16:23:05.956Z] =================================================================================================================== 00:07:11.899 [2024-10-08T16:23:05.956Z] Total : 25109.00 98.08 0.00 0.00 0.00 0.00 0.00 00:07:11.899 00:07:12.838 18:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:13.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.098 Nvme0n1 : 2.00 25319.00 98.90 0.00 0.00 0.00 0.00 0.00 00:07:13.098 [2024-10-08T16:23:07.155Z] =================================================================================================================== 00:07:13.098 [2024-10-08T16:23:07.155Z] Total : 25319.00 98.90 0.00 0.00 0.00 0.00 0.00 00:07:13.098 00:07:13.098 true 00:07:13.098 18:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:13.098 18:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:13.357 18:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:13.357 18:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:13.357 18:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1043518 00:07:13.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.926 Nvme0n1 : 3.00 25372.33 99.11 0.00 0.00 0.00 0.00 0.00 00:07:13.926 [2024-10-08T16:23:07.984Z] =================================================================================================================== 00:07:13.927 [2024-10-08T16:23:07.984Z] Total : 25372.33 99.11 0.00 0.00 0.00 0.00 0.00 00:07:13.927 00:07:15.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.305 Nvme0n1 : 4.00 25425.00 99.32 0.00 0.00 0.00 0.00 0.00 00:07:15.305 [2024-10-08T16:23:09.362Z] 
=================================================================================================================== 00:07:15.305 [2024-10-08T16:23:09.362Z] Total : 25425.00 99.32 0.00 0.00 0.00 0.00 0.00 00:07:15.305 00:07:16.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.244 Nvme0n1 : 5.00 25458.60 99.45 0.00 0.00 0.00 0.00 0.00 00:07:16.244 [2024-10-08T16:23:10.301Z] =================================================================================================================== 00:07:16.244 [2024-10-08T16:23:10.301Z] Total : 25458.60 99.45 0.00 0.00 0.00 0.00 0.00 00:07:16.244 00:07:17.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.182 Nvme0n1 : 6.00 25482.17 99.54 0.00 0.00 0.00 0.00 0.00 00:07:17.182 [2024-10-08T16:23:11.239Z] =================================================================================================================== 00:07:17.182 [2024-10-08T16:23:11.240Z] Total : 25482.17 99.54 0.00 0.00 0.00 0.00 0.00 00:07:17.183 00:07:18.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.125 Nvme0n1 : 7.00 25507.00 99.64 0.00 0.00 0.00 0.00 0.00 00:07:18.125 [2024-10-08T16:23:12.182Z] =================================================================================================================== 00:07:18.125 [2024-10-08T16:23:12.182Z] Total : 25507.00 99.64 0.00 0.00 0.00 0.00 0.00 00:07:18.125 00:07:19.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.066 Nvme0n1 : 8.00 25526.50 99.71 0.00 0.00 0.00 0.00 0.00 00:07:19.066 [2024-10-08T16:23:13.123Z] =================================================================================================================== 00:07:19.066 [2024-10-08T16:23:13.123Z] Total : 25526.50 99.71 0.00 0.00 0.00 0.00 0.00 00:07:19.066 00:07:20.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.007 Nvme0n1 : 9.00 25541.44 99.77 0.00 0.00 0.00 0.00 0.00 00:07:20.007 [2024-10-08T16:23:14.064Z] =================================================================================================================== 00:07:20.007 [2024-10-08T16:23:14.064Z] Total : 25541.44 99.77 0.00 0.00 0.00 0.00 0.00 00:07:20.007 00:07:20.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.950 Nvme0n1 : 10.00 25553.70 99.82 0.00 0.00 0.00 0.00 0.00 00:07:20.950 [2024-10-08T16:23:15.007Z] =================================================================================================================== 00:07:20.950 [2024-10-08T16:23:15.007Z] Total : 25553.70 99.82 0.00 0.00 0.00 0.00 0.00 00:07:20.950 00:07:20.950 00:07:20.950 Latency(us) 00:07:20.950 [2024-10-08T16:23:15.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.950 Nvme0n1 : 10.00 25555.64 99.83 0.00 0.00 5005.84 2976.43 11468.80 00:07:20.950 [2024-10-08T16:23:15.007Z] =================================================================================================================== 00:07:20.950 [2024-10-08T16:23:15.007Z] Total : 25555.64 99.83 0.00 0.00 5005.84 2976.43 11468.80 00:07:20.950 { 00:07:20.950 "results": [ 00:07:20.950 { 00:07:20.950 "job": "Nvme0n1", 00:07:20.950 "core_mask": "0x2", 00:07:20.950 "workload": "randwrite", 00:07:20.950 "status": "finished", 00:07:20.950 "queue_depth": 128, 00:07:20.950 "io_size": 4096, 00:07:20.950 
"runtime": 10.004249, 00:07:20.950 "iops": 25555.641407965755, 00:07:20.950 "mibps": 99.82672424986623, 00:07:20.950 "io_failed": 0, 00:07:20.950 "io_timeout": 0, 00:07:20.950 "avg_latency_us": 5005.836553093567, 00:07:20.950 "min_latency_us": 2976.4266666666667, 00:07:20.950 "max_latency_us": 11468.8 00:07:20.950 } 00:07:20.950 ], 00:07:20.950 "core_count": 1 00:07:20.950 } 00:07:20.950 18:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1043178 00:07:20.950 18:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1043178 ']' 00:07:20.950 18:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1043178 00:07:20.950 18:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:20.950 18:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.950 18:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043178 00:07:21.211 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:21.211 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:21.211 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043178' 00:07:21.211 killing process with pid 1043178 00:07:21.211 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1043178 00:07:21.211 Received shutdown signal, test time was about 10.000000 seconds 00:07:21.211 00:07:21.211 Latency(us) 00:07:21.211 [2024-10-08T16:23:15.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.211 [2024-10-08T16:23:15.268Z] =================================================================================================================== 00:07:21.211 [2024-10-08T16:23:15.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:21.211 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1043178 00:07:21.211 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.471 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:21.733 18:23:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1039373 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1039373 00:07:21.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1039373 Killed "${NVMF_APP[@]}" "$@" 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1045552 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1045552 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1045552 ']' 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.733 18:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.995 [2024-10-08 18:23:15.844624] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:07:21.995 [2024-10-08 18:23:15.844682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.995 [2024-10-08 18:23:15.927355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.995 [2024-10-08 18:23:15.982533] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.995 [2024-10-08 18:23:15.982566] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.995 [2024-10-08 18:23:15.982572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.995 [2024-10-08 18:23:15.982576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:21.995 [2024-10-08 18:23:15.982580] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.995 [2024-10-08 18:23:15.983067] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.567 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.567 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:22.567 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:22.567 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.567 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.829 [2024-10-08 18:23:16.808538] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:22.829 [2024-10-08 18:23:16.808615] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:22.829 [2024-10-08 18:23:16.808636] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:22.829 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.278 18:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9419c7c-50b4-4924-8a4e-ef176a4e67f0 -t 2000 00:07:23.278 [ 00:07:23.278 { 00:07:23.278 "name": "e9419c7c-50b4-4924-8a4e-ef176a4e67f0", 00:07:23.278 "aliases": [ 00:07:23.278 "lvs/lvol" 00:07:23.278 ], 00:07:23.278 "product_name": "Logical Volume", 00:07:23.278 "block_size": 4096, 00:07:23.278 "num_blocks": 38912, 00:07:23.278 "uuid": "e9419c7c-50b4-4924-8a4e-ef176a4e67f0", 00:07:23.278 "assigned_rate_limits": { 00:07:23.278 "rw_ios_per_sec": 0, 00:07:23.279 "rw_mbytes_per_sec": 0, 
00:07:23.279 "r_mbytes_per_sec": 0, 00:07:23.279 "w_mbytes_per_sec": 0 00:07:23.279 }, 00:07:23.279 "claimed": false, 00:07:23.279 "zoned": false, 00:07:23.279 "supported_io_types": { 00:07:23.279 "read": true, 00:07:23.279 "write": true, 00:07:23.279 "unmap": true, 00:07:23.279 "flush": false, 00:07:23.279 "reset": true, 00:07:23.279 "nvme_admin": false, 00:07:23.279 "nvme_io": false, 00:07:23.279 "nvme_io_md": false, 00:07:23.279 "write_zeroes": true, 00:07:23.279 "zcopy": false, 00:07:23.279 "get_zone_info": false, 00:07:23.279 "zone_management": false, 00:07:23.279 "zone_append": false, 00:07:23.279 "compare": false, 00:07:23.279 "compare_and_write": false, 00:07:23.279 "abort": false, 00:07:23.279 "seek_hole": true, 00:07:23.279 "seek_data": true, 00:07:23.279 "copy": false, 00:07:23.279 "nvme_iov_md": false 00:07:23.279 }, 00:07:23.279 "driver_specific": { 00:07:23.279 "lvol": { 00:07:23.279 "lvol_store_uuid": "ae748900-5563-4b96-9adf-49311b7f7321", 00:07:23.279 "base_bdev": "aio_bdev", 00:07:23.279 "thin_provision": false, 00:07:23.279 "num_allocated_clusters": 38, 00:07:23.279 "snapshot": false, 00:07:23.279 "clone": false, 00:07:23.279 "esnap_clone": false 00:07:23.279 } 00:07:23.279 } 00:07:23.279 } 00:07:23.279 ] 00:07:23.279 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:23.279 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:23.280 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:23.280 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:23.280 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:23.280 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:23.544 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:23.544 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.805 [2024-10-08 18:23:17.633121] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.805 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:23.805 request: 00:07:23.805 { 00:07:23.805 "uuid": "ae748900-5563-4b96-9adf-49311b7f7321", 00:07:23.805 "method": "bdev_lvol_get_lvstores", 00:07:23.805 "req_id": 1 00:07:23.805 } 00:07:23.805 Got JSON-RPC error response 00:07:23.805 response: 00:07:23.805 { 00:07:23.805 "code": -19, 00:07:23.805 "message": "No such device" 00:07:23.805 } 00:07:24.065 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:24.065 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.065 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.065 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.065 18:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.065 aio_bdev 00:07:24.065 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:24.065 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:24.065 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:24.065 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:24.065 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:24.065 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:24.065 18:23:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:24.325 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9419c7c-50b4-4924-8a4e-ef176a4e67f0 -t 2000 00:07:24.325 [ 00:07:24.325 { 00:07:24.325 "name": "e9419c7c-50b4-4924-8a4e-ef176a4e67f0", 00:07:24.325 "aliases": [ 00:07:24.325 "lvs/lvol" 00:07:24.325 ], 00:07:24.325 "product_name": "Logical Volume", 00:07:24.325 "block_size": 4096, 00:07:24.325 "num_blocks": 38912, 00:07:24.325 "uuid": "e9419c7c-50b4-4924-8a4e-ef176a4e67f0", 00:07:24.325 "assigned_rate_limits": { 00:07:24.325 "rw_ios_per_sec": 0, 00:07:24.325 "rw_mbytes_per_sec": 0, 00:07:24.325 "r_mbytes_per_sec": 0, 00:07:24.325 "w_mbytes_per_sec": 0 00:07:24.325 }, 00:07:24.325 "claimed": false, 00:07:24.325 "zoned": false, 00:07:24.325 "supported_io_types": { 00:07:24.325 "read": true, 00:07:24.325 "write": true, 00:07:24.325 "unmap": true, 00:07:24.325 "flush": false, 00:07:24.325 "reset": true, 00:07:24.325 "nvme_admin": false, 00:07:24.325 "nvme_io": false, 00:07:24.325 "nvme_io_md": false, 00:07:24.325 "write_zeroes": true, 00:07:24.325 "zcopy": false, 00:07:24.325 "get_zone_info": false, 00:07:24.325 "zone_management": false, 00:07:24.325 "zone_append": false, 00:07:24.325 "compare": false, 00:07:24.325 "compare_and_write": false, 00:07:24.325 "abort": false, 00:07:24.325 "seek_hole": true, 00:07:24.325 "seek_data": true, 00:07:24.325 "copy": false, 00:07:24.325 "nvme_iov_md": false 00:07:24.325 }, 00:07:24.325 "driver_specific": { 00:07:24.325 "lvol": { 00:07:24.325 "lvol_store_uuid": "ae748900-5563-4b96-9adf-49311b7f7321", 00:07:24.325 "base_bdev": "aio_bdev", 00:07:24.325 "thin_provision": false, 00:07:24.325 "num_allocated_clusters": 38, 00:07:24.325 "snapshot": false, 00:07:24.325 "clone": false, 00:07:24.325 "esnap_clone": false 00:07:24.325 } 00:07:24.325 } 00:07:24.325 } 00:07:24.325 ] 00:07:24.325 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:24.325 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:24.325 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:24.585 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:24.585 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:24.585 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:24.845 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:24.845 18:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9419c7c-50b4-4924-8a4e-ef176a4e67f0 00:07:24.845 18:23:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae748900-5563-4b96-9adf-49311b7f7321 00:07:25.104 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.365 00:07:25.365 real 0m17.406s 00:07:25.365 user 0m45.666s 00:07:25.365 sys 0m3.000s 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.365 ************************************ 00:07:25.365 END TEST lvs_grow_dirty 00:07:25.365 ************************************ 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:25.365 nvmf_trace.0 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.365 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.365 rmmod nvme_tcp 00:07:25.365 rmmod nvme_fabrics 00:07:25.365 rmmod nvme_keyring 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:25.627 
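What remains below is the generic nvmftestfini teardown. Condensed into a sketch, the sequence the trace shows is roughly the following (names are this run's: cvl_0_1 is the E810 test interface, 1045552 the nvmf_tgt pid, and $output_dir stands for the Jenkins output directory):

    # Sketch only: post-suite cleanup as logged above.
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    sync
    modprobe -v -r nvme-tcp        # in this run its removal also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # the nvmf_tgt started earlier
    ip -4 addr flush cvl_0_1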
18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1045552 ']' 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1045552 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1045552 ']' 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1045552 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045552 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045552' 00:07:25.627 killing process with pid 1045552 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1045552 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1045552 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.627 18:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:28.174 00:07:28.174 real 0m44.829s 00:07:28.174 user 1m7.632s 00:07:28.174 sys 0m10.681s 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.174 ************************************ 00:07:28.174 END TEST nvmf_lvs_grow 00:07:28.174 ************************************ 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.174 ************************************ 00:07:28.174 START TEST nvmf_bdev_io_wait 00:07:28.174 ************************************ 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.174 * Looking for test storage... 00:07:28.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.174 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.175 --rc genhtml_branch_coverage=1 00:07:28.175 --rc genhtml_function_coverage=1 00:07:28.175 --rc genhtml_legend=1 00:07:28.175 --rc geninfo_all_blocks=1 00:07:28.175 --rc geninfo_unexecuted_blocks=1 00:07:28.175 00:07:28.175 ' 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.175 --rc genhtml_branch_coverage=1 00:07:28.175 --rc genhtml_function_coverage=1 00:07:28.175 --rc genhtml_legend=1 00:07:28.175 --rc geninfo_all_blocks=1 00:07:28.175 --rc geninfo_unexecuted_blocks=1 00:07:28.175 00:07:28.175 ' 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.175 --rc genhtml_branch_coverage=1 00:07:28.175 --rc genhtml_function_coverage=1 00:07:28.175 --rc genhtml_legend=1 00:07:28.175 --rc geninfo_all_blocks=1 00:07:28.175 --rc geninfo_unexecuted_blocks=1 00:07:28.175 00:07:28.175 ' 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.175 --rc genhtml_branch_coverage=1 00:07:28.175 --rc genhtml_function_coverage=1 00:07:28.175 --rc genhtml_legend=1 00:07:28.175 --rc geninfo_all_blocks=1 00:07:28.175 --rc geninfo_unexecuted_blocks=1 00:07:28.175 00:07:28.175 ' 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.175 18:23:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.175 18:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.175 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.176 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.176 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:28.176 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:28.176 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.176 18:23:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:36.325 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:36.325 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.325 18:23:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:36.325 Found net devices under 0000:31:00.0: cvl_0_0 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:36.325 Found net devices under 0000:31:00.1: cvl_0_1 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:36.325 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:07:36.326 00:07:36.326 --- 10.0.0.2 ping statistics --- 00:07:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.326 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:07:36.326 00:07:36.326 --- 10.0.0.1 ping statistics --- 00:07:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.326 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1050698 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1050698 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1050698 ']' 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.326 18:23:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.326 [2024-10-08 18:23:29.793234] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
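
The two pings above confirm the point-to-point test network that the tcp init path builds from the e810 ports: cvl_0_0 is moved into a private namespace as the target side, while cvl_0_1 stays in the default namespace as the initiator. A rough reconstruction of that wiring, with the interface names, addresses, and firewall rule taken directly from the trace (the real nvmf/common.sh carries extra branching for virtual NICs and second-IP setups that is omitted here):

    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port joins the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # open the NVMe/TCP listener port; the SPDK_NVMF comment is what lets
    # the fini path strip the rule via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
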
00:07:36.326 [2024-10-08 18:23:29.793302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.326 [2024-10-08 18:23:29.883046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.326 [2024-10-08 18:23:29.981004] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.326 [2024-10-08 18:23:29.981065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.326 [2024-10-08 18:23:29.981074] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.326 [2024-10-08 18:23:29.981085] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.326 [2024-10-08 18:23:29.981091] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.326 [2024-10-08 18:23:29.983485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.326 [2024-10-08 18:23:29.983651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.326 [2024-10-08 18:23:29.983808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.326 [2024-10-08 18:23:29.983808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.588 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.588 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:36.588 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:36.588 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.588 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:36.849 [2024-10-08 18:23:30.742306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 Malloc0 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 [2024-10-08 18:23:30.820679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1051050 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1051052 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:36.849 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:36.849 { 00:07:36.849 "params": { 
00:07:36.849 "name": "Nvme$subsystem", 00:07:36.849 "trtype": "$TEST_TRANSPORT", 00:07:36.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "$NVMF_PORT", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.850 "hdgst": ${hdgst:-false}, 00:07:36.850 "ddgst": ${ddgst:-false} 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 } 00:07:36.850 EOF 00:07:36.850 )") 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1051054 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:36.850 { 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme$subsystem", 00:07:36.850 "trtype": "$TEST_TRANSPORT", 00:07:36.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "$NVMF_PORT", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.850 "hdgst": ${hdgst:-false}, 00:07:36.850 "ddgst": ${ddgst:-false} 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 } 00:07:36.850 EOF 00:07:36.850 )") 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1051057 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:36.850 { 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme$subsystem", 00:07:36.850 "trtype": "$TEST_TRANSPORT", 00:07:36.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "$NVMF_PORT", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.850 "hdgst": ${hdgst:-false}, 
00:07:36.850 "ddgst": ${ddgst:-false} 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 } 00:07:36.850 EOF 00:07:36.850 )") 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:36.850 { 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme$subsystem", 00:07:36.850 "trtype": "$TEST_TRANSPORT", 00:07:36.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "$NVMF_PORT", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.850 "hdgst": ${hdgst:-false}, 00:07:36.850 "ddgst": ${ddgst:-false} 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 } 00:07:36.850 EOF 00:07:36.850 )") 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1051050 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme1", 00:07:36.850 "trtype": "tcp", 00:07:36.850 "traddr": "10.0.0.2", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "4420", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.850 "hdgst": false, 00:07:36.850 "ddgst": false 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 }' 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme1", 00:07:36.850 "trtype": "tcp", 00:07:36.850 "traddr": "10.0.0.2", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "4420", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.850 "hdgst": false, 00:07:36.850 "ddgst": false 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 }' 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme1", 00:07:36.850 "trtype": "tcp", 00:07:36.850 "traddr": "10.0.0.2", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "4420", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.850 "hdgst": false, 00:07:36.850 "ddgst": false 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 }' 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:36.850 18:23:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:36.850 "params": { 00:07:36.850 "name": "Nvme1", 00:07:36.850 "trtype": "tcp", 00:07:36.850 "traddr": "10.0.0.2", 00:07:36.850 "adrfam": "ipv4", 00:07:36.850 "trsvcid": "4420", 00:07:36.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.850 "hdgst": false, 00:07:36.850 "ddgst": false 00:07:36.850 }, 00:07:36.850 "method": "bdev_nvme_attach_controller" 00:07:36.850 }' 00:07:36.850 [2024-10-08 18:23:30.879377] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:07:36.850 [2024-10-08 18:23:30.879448] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:36.850 [2024-10-08 18:23:30.882733] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:07:36.850 [2024-10-08 18:23:30.882798] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:36.850 [2024-10-08 18:23:30.883161] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:07:36.850 [2024-10-08 18:23:30.883223] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:36.850 [2024-10-08 18:23:30.884367] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
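
The four bdevperf instances starting up here (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each with its own shm id) never read a config file: gen_nvmf_target_json prints the attach-controller JSON and bash process substitution hands it to --json as /dev/fd/63. A condensed sketch with the values already resolved as in the printed configs; note the real helper assembles this from heredoc fragments per subsystem, validates with jq, and appears to wrap the entries in a subsystems envelope, all of which is omitted here:

    gen_nvmf_target_json() {
        # resolved parameters exactly as printed in the trace above
        printf '%s\n' '{
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }'
    }

    # one instance per workload; -m picks the core, -i the shm id, and the
    # JSON arrives on a /dev/fd path via process substitution
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
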
00:07:36.850 [2024-10-08 18:23:30.884455] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:37.111 [2024-10-08 18:23:31.084520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.111 [2024-10-08 18:23:31.155557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:37.373 [2024-10-08 18:23:31.177772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.373 [2024-10-08 18:23:31.249353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:07:37.373 [2024-10-08 18:23:31.272113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.373 [2024-10-08 18:23:31.343895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.373 [2024-10-08 18:23:31.346319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:07:37.373 [2024-10-08 18:23:31.410284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:07:37.635 Running I/O for 1 seconds... 00:07:37.635 Running I/O for 1 seconds... 00:07:37.899 Running I/O for 1 seconds... 00:07:37.899 Running I/O for 1 seconds... 00:07:38.842 12913.00 IOPS, 50.44 MiB/s 00:07:38.842 Latency(us) 00:07:38.842 [2024-10-08T16:23:32.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.842 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:38.842 Nvme1n1 : 1.01 12969.81 50.66 0.00 0.00 9835.97 5215.57 16930.13 00:07:38.842 [2024-10-08T16:23:32.899Z] =================================================================================================================== 00:07:38.842 [2024-10-08T16:23:32.899Z] Total : 12969.81 50.66 0.00 0.00 9835.97 5215.57 16930.13 00:07:38.842 6180.00 IOPS, 24.14 MiB/s 00:07:38.842 Latency(us) 00:07:38.842 [2024-10-08T16:23:32.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.842 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:38.842 Nvme1n1 : 1.02 6223.94 24.31 0.00 0.00 20429.06 8246.61 28617.39 00:07:38.842 [2024-10-08T16:23:32.899Z] =================================================================================================================== 00:07:38.842 [2024-10-08T16:23:32.899Z] Total : 6223.94 24.31 0.00 0.00 20429.06 8246.61 28617.39 00:07:38.842 188016.00 IOPS, 734.44 MiB/s 00:07:38.842 Latency(us) 00:07:38.842 [2024-10-08T16:23:32.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.842 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:38.842 Nvme1n1 : 1.00 187631.85 732.94 0.00 0.00 678.40 319.15 2034.35 00:07:38.842 [2024-10-08T16:23:32.899Z] =================================================================================================================== 00:07:38.842 [2024-10-08T16:23:32.899Z] Total : 187631.85 732.94 0.00 0.00 678.40 319.15 2034.35 00:07:38.842 6159.00 IOPS, 24.06 MiB/s 00:07:38.842 Latency(us) 00:07:38.842 [2024-10-08T16:23:32.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.842 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:38.842 Nvme1n1 : 1.01 6242.22 24.38 0.00 0.00 20429.05 5761.71 45219.84 00:07:38.842 [2024-10-08T16:23:32.899Z] 
=================================================================================================================== 00:07:38.842 [2024-10-08T16:23:32.899Z] Total : 6242.22 24.38 0.00 0.00 20429.05 5761.71 45219.84 00:07:38.842 18:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1051052 00:07:39.104 18:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1051054 00:07:39.104 18:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1051057 00:07:39.104 18:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.104 18:23:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:39.104 rmmod nvme_tcp 00:07:39.104 rmmod nvme_fabrics 00:07:39.104 rmmod nvme_keyring 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1050698 ']' 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1050698 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1050698 ']' 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1050698 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050698 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1050698' 00:07:39.104 killing process with pid 1050698 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1050698 00:07:39.104 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1050698 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.364 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.365 18:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.913 00:07:41.913 real 0m13.612s 00:07:41.913 user 0m21.130s 00:07:41.913 sys 0m7.760s 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 ************************************ 00:07:41.913 END TEST nvmf_bdev_io_wait 00:07:41.913 ************************************ 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 ************************************ 00:07:41.913 START TEST nvmf_queue_depth 00:07:41.913 ************************************ 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:41.913 * Looking for test storage... 
00:07:41.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.913 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:41.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.914 --rc genhtml_branch_coverage=1 00:07:41.914 --rc genhtml_function_coverage=1 00:07:41.914 --rc genhtml_legend=1 00:07:41.914 --rc geninfo_all_blocks=1 00:07:41.914 --rc geninfo_unexecuted_blocks=1 00:07:41.914 00:07:41.914 ' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:41.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.914 --rc genhtml_branch_coverage=1 00:07:41.914 --rc genhtml_function_coverage=1 00:07:41.914 --rc genhtml_legend=1 00:07:41.914 --rc geninfo_all_blocks=1 00:07:41.914 --rc geninfo_unexecuted_blocks=1 00:07:41.914 00:07:41.914 ' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:41.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.914 --rc genhtml_branch_coverage=1 00:07:41.914 --rc genhtml_function_coverage=1 00:07:41.914 --rc genhtml_legend=1 00:07:41.914 --rc geninfo_all_blocks=1 00:07:41.914 --rc geninfo_unexecuted_blocks=1 00:07:41.914 00:07:41.914 ' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:41.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.914 --rc genhtml_branch_coverage=1 00:07:41.914 --rc genhtml_function_coverage=1 00:07:41.914 --rc genhtml_legend=1 00:07:41.914 --rc geninfo_all_blocks=1 00:07:41.914 --rc geninfo_unexecuted_blocks=1 00:07:41.914 00:07:41.914 ' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
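# The trace above is scripts/common.sh checking whether the installed lcov
# (1.15) predates major version 2: cmp_versions splits both strings on
# ".-:" and compares field by field. A rough standalone equivalent --
# version_lt is an illustrative name, not the actual helper:
version_lt() {                        # exit 0 when $1 < $2
  local IFS='.-:' i; local -a a b
  read -ra a <<< "$1"; read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                            # equal is not "less than"
}
version_lt 1.15 2 && echo 'old lcov'  # prints 'old lcov', matching the return 0 traced here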
-- nvmf/common.sh@7 -- # uname -s 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.914 18:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:50.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:50.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:50.074 Found net devices under 0000:31:00.0: cvl_0_0 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:50.074 Found net devices under 0000:31:00.1: cvl_0_1 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
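# gather_supported_nvmf_pci_devs above walks the PCI bus for supported NICs
# (here the two Intel E810 ports, vendor 0x8086 / device 0x159b) and maps
# each function to its kernel netdev through sysfs, producing the "Found net
# devices under ..." lines. A rough equivalent of that lookup:
for pci in /sys/bus/pci/devices/*; do
  [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
  for net in "$pci"/net/*; do
    echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done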
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:07:50.074 00:07:50.074 --- 10.0.0.2 ping statistics --- 00:07:50.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.074 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:07:50.074 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
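# nvmf_tcp_init above builds the physical-NIC test bed: the first E810 port
# (cvl_0_0) moves into a private namespace as the target at 10.0.0.2, while
# cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the
# two pings prove both directions work before any NVMe traffic. Condensed,
# assuming the same interface names:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator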
00:07:50.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:07:50.074 00:07:50.074 --- 10.0.0.1 ping statistics --- 00:07:50.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.075 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1055820 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1055820 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1055820 ']' 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.075 18:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.075 [2024-10-08 18:23:43.509792] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
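# nvmfappstart launches nvmf_tgt inside the namespace pinned to core 1
# (-m 0x2), and waitforlisten blocks until the RPC socket answers. The
# readiness poll below is only a stand-in for waitforlisten's internals,
# with the jenkins build path shortened:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  sleep 0.1
done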
00:07:50.075 [2024-10-08 18:23:43.509860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.075 [2024-10-08 18:23:43.603184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.075 [2024-10-08 18:23:43.695558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.075 [2024-10-08 18:23:43.695620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.075 [2024-10-08 18:23:43.695628] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.075 [2024-10-08 18:23:43.695635] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.075 [2024-10-08 18:23:43.695642] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.075 [2024-10-08 18:23:43.696429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.335 [2024-10-08 18:23:44.373070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.335 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.595 Malloc0 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.595 18:23:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.595 [2024-10-08 18:23:44.447076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1056164 00:07:50.595 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1056164 /var/tmp/bdevperf.sock 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1056164 ']' 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.596 18:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.596 [2024-10-08 18:23:44.504153] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
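# Steps @23-@27 of queue_depth.sh above configure the running target over
# its RPC socket: a TCP transport with the test's -o -u 8192 options, a
# 64 MiB / 512 B-block malloc bdev behind subsystem cnode1, and a listener
# on the namespaced address. The same sequence via scripts/rpc.py, paths
# abbreviated:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420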
00:07:50.596 [2024-10-08 18:23:44.504212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056164 ] 00:07:50.596 [2024-10-08 18:23:44.587124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.855 [2024-10-08 18:23:44.682250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.426 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.426 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:51.426 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:51.426 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.426 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.686 NVMe0n1 00:07:51.686 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.686 18:23:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:51.686 Running I/O for 10 seconds... 00:07:53.702 8192.00 IOPS, 32.00 MiB/s [2024-10-08T16:23:48.795Z] 9773.00 IOPS, 38.18 MiB/s [2024-10-08T16:23:49.736Z] 10446.00 IOPS, 40.80 MiB/s [2024-10-08T16:23:50.679Z] 10931.00 IOPS, 42.70 MiB/s [2024-10-08T16:23:52.065Z] 11385.40 IOPS, 44.47 MiB/s [2024-10-08T16:23:53.007Z] 11769.33 IOPS, 45.97 MiB/s [2024-10-08T16:23:53.948Z] 12048.86 IOPS, 47.07 MiB/s [2024-10-08T16:23:54.890Z] 12248.00 IOPS, 47.84 MiB/s [2024-10-08T16:23:55.832Z] 12403.11 IOPS, 48.45 MiB/s [2024-10-08T16:23:55.832Z] 12565.50 IOPS, 49.08 MiB/s 00:08:01.775 Latency(us) 00:08:01.775 [2024-10-08T16:23:55.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.775 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:01.775 Verification LBA range: start 0x0 length 0x4000 00:08:01.775 NVMe0n1 : 10.06 12590.02 49.18 0.00 0.00 81034.47 22063.79 85196.80 00:08:01.775 [2024-10-08T16:23:55.832Z] =================================================================================================================== 00:08:01.775 [2024-10-08T16:23:55.832Z] Total : 12590.02 49.18 0.00 0.00 81034.47 22063.79 85196.80 00:08:01.775 { 00:08:01.775 "results": [ 00:08:01.775 { 00:08:01.775 "job": "NVMe0n1", 00:08:01.775 "core_mask": "0x1", 00:08:01.775 "workload": "verify", 00:08:01.775 "status": "finished", 00:08:01.775 "verify_range": { 00:08:01.775 "start": 0, 00:08:01.775 "length": 16384 00:08:01.775 }, 00:08:01.775 "queue_depth": 1024, 00:08:01.775 "io_size": 4096, 00:08:01.775 "runtime": 10.058603, 00:08:01.775 "iops": 12590.018713334248, 00:08:01.775 "mibps": 49.179760598961906, 00:08:01.775 "io_failed": 0, 00:08:01.775 "io_timeout": 0, 00:08:01.775 "avg_latency_us": 81034.46772985464, 00:08:01.775 "min_latency_us": 22063.786666666667, 00:08:01.775 "max_latency_us": 85196.8 00:08:01.775 } 00:08:01.775 ], 00:08:01.775 "core_count": 1 00:08:01.775 } 00:08:01.775 18:23:55 
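# That was the depth test itself: bdevperf starts idle (-z) on its own RPC
# socket, bdev_nvme_attach_controller brings the exported namespace in as
# NVMe0n1, and perform_tests drives 4 KiB verify I/O at queue depth 1024
# for 10 s, landing at ~12.6k IOPS / ~49 MiB/s above. The three-step
# sequence, with build paths shortened:
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests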
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1056164 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1056164 ']' 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1056164 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056164 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056164' 00:08:01.775 killing process with pid 1056164 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1056164 00:08:01.775 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.775 00:08:01.775 Latency(us) 00:08:01.775 [2024-10-08T16:23:55.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.775 [2024-10-08T16:23:55.832Z] =================================================================================================================== 00:08:01.775 [2024-10-08T16:23:55.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.775 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1056164 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.037 rmmod nvme_tcp 00:08:02.037 rmmod nvme_fabrics 00:08:02.037 rmmod nvme_keyring 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1055820 ']' 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1055820 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1055820 ']' 00:08:02.037 18:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1055820 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1055820 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1055820' 00:08:02.037 killing process with pid 1055820 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1055820 00:08:02.037 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1055820 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.298 18:23:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:04.842 00:08:04.842 real 0m22.791s 00:08:04.842 user 0m26.026s 00:08:04.842 sys 0m7.146s 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:04.842 ************************************ 00:08:04.842 END TEST nvmf_queue_depth 00:08:04.842 ************************************ 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.842 ************************************ 00:08:04.842 START TEST nvmf_target_multipath 00:08:04.842 ************************************ 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:04.842 * Looking for test storage... 00:08:04.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:04.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.842 --rc genhtml_branch_coverage=1 00:08:04.842 --rc genhtml_function_coverage=1 00:08:04.842 --rc genhtml_legend=1 00:08:04.842 --rc geninfo_all_blocks=1 00:08:04.842 --rc geninfo_unexecuted_blocks=1 00:08:04.842 00:08:04.842 ' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:04.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.842 --rc genhtml_branch_coverage=1 00:08:04.842 --rc genhtml_function_coverage=1 00:08:04.842 --rc genhtml_legend=1 00:08:04.842 --rc geninfo_all_blocks=1 00:08:04.842 --rc geninfo_unexecuted_blocks=1 00:08:04.842 00:08:04.842 ' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:04.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.842 --rc genhtml_branch_coverage=1 00:08:04.842 --rc genhtml_function_coverage=1 00:08:04.842 --rc genhtml_legend=1 00:08:04.842 --rc geninfo_all_blocks=1 00:08:04.842 --rc geninfo_unexecuted_blocks=1 00:08:04.842 00:08:04.842 ' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:04.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.842 --rc genhtml_branch_coverage=1 00:08:04.842 --rc genhtml_function_coverage=1 00:08:04.842 --rc genhtml_legend=1 00:08:04.842 --rc geninfo_all_blocks=1 00:08:04.842 --rc geninfo_unexecuted_blocks=1 00:08:04.842 00:08:04.842 ' 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.842 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.843 18:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:12.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:12.983 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:12.983 Found net devices under 0000:31:00.0: cvl_0_0 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.983 18:24:05 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:12.983 Found net devices under 0000:31:00.1: cvl_0_1 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.983 18:24:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:08:12.983 00:08:12.983 --- 10.0.0.2 ping statistics --- 00:08:12.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.983 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:08:12.983 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:08:12.984 00:08:12.984 --- 10.0.0.1 ping statistics --- 00:08:12.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.984 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:12.984 only one NIC for nvmf test 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
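The nvmftestinit trace above (nvmf/common.sh@250-291) builds the point-to-point TCP topology the multipath test runs on: one port of the E810 NIC is moved into a private network namespace to act as the target, the peer port stays in the root namespace as the initiator, and a single iptables rule opens the NVMe/TCP port between them. A condensed, stand-alone sketch of the same steps, assuming root privileges and the cvl_0_0/cvl_0_1 interface names udev assigned on this host:

  # target side lives in its own namespace so both ports of one NIC can talk
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The two pings are the framework's sanity check; only after both succeed does it prefix NVMF_APP with "ip netns exec cvl_0_0_ns_spdk", so the target process later starts inside the namespace.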
00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.984 rmmod nvme_tcp 00:08:12.984 rmmod nvme_fabrics 00:08:12.984 rmmod nvme_keyring 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.984 18:24:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.900 00:08:14.900 real 0m10.124s 00:08:14.900 user 0m2.201s 00:08:14.900 sys 0m5.835s 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:14.900 ************************************ 00:08:14.900 END TEST nvmf_target_multipath 00:08:14.900 ************************************ 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.900 ************************************ 00:08:14.900 START TEST nvmf_zcopy 00:08:14.900 ************************************ 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:14.900 * Looking for test storage... 
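Both test runs in this log print "/var/jenkins/.../test/nvmf/common.sh: line 33: [: : integer expression expected". The trace shows why: the command expands to '[' '' -eq 1 ']', and test(1) cannot parse an empty string as an integer, so it reports the error, returns nonzero, and the script simply continues as if the test were false. A minimal sketch of the failure mode and a guarded form, using a hypothetical variable name rather than the one actually read at common.sh line 33:

  unset SPDK_TEST_EXAMPLE                            # hypothetical flag, starts unset
  [ "$SPDK_TEST_EXAMPLE" -eq 1 ] && echo on          # -> [: : integer expression expected
  [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ] && echo on     # default to 0: quietly false when unset

Because '[' returns nonzero on the parse error, the branch is skipped either way; the guarded form only removes the noise from the log.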
00:08:14.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.900 --rc genhtml_branch_coverage=1 00:08:14.900 --rc genhtml_function_coverage=1 00:08:14.900 --rc genhtml_legend=1 00:08:14.900 --rc geninfo_all_blocks=1 00:08:14.900 --rc geninfo_unexecuted_blocks=1 00:08:14.900 00:08:14.900 ' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.900 --rc genhtml_branch_coverage=1 00:08:14.900 --rc genhtml_function_coverage=1 00:08:14.900 --rc genhtml_legend=1 00:08:14.900 --rc geninfo_all_blocks=1 00:08:14.900 --rc geninfo_unexecuted_blocks=1 00:08:14.900 00:08:14.900 ' 00:08:14.900 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.900 --rc genhtml_branch_coverage=1 00:08:14.900 --rc genhtml_function_coverage=1 00:08:14.901 --rc genhtml_legend=1 00:08:14.901 --rc geninfo_all_blocks=1 00:08:14.901 --rc geninfo_unexecuted_blocks=1 00:08:14.901 00:08:14.901 ' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:14.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.901 --rc genhtml_branch_coverage=1 00:08:14.901 --rc genhtml_function_coverage=1 00:08:14.901 --rc genhtml_legend=1 00:08:14.901 --rc geninfo_all_blocks=1 00:08:14.901 --rc geninfo_unexecuted_blocks=1 00:08:14.901 00:08:14.901 ' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.901 18:24:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:23.044 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.044 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:23.044 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:23.045 Found net devices under 0000:31:00.0: cvl_0_0 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:23.045 Found net devices under 0000:31:00.1: cvl_0_1 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:08:23.045 00:08:23.045 --- 10.0.0.2 ping statistics --- 00:08:23.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.045 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:08:23.045 00:08:23.045 --- 10.0.0.1 ping statistics --- 00:08:23.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.045 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1067558 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1067558 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1067558 ']' 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.045 18:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.045 [2024-10-08 18:24:16.581742] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:08:23.045 [2024-10-08 18:24:16.581806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.045 [2024-10-08 18:24:16.668620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.045 [2024-10-08 18:24:16.759792] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.045 [2024-10-08 18:24:16.759853] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.045 [2024-10-08 18:24:16.759862] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.045 [2024-10-08 18:24:16.759870] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.045 [2024-10-08 18:24:16.759876] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.045 [2024-10-08 18:24:16.760707] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 [2024-10-08 18:24:17.449056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 [2024-10-08 18:24:17.473327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 malloc0 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:23.618 { 00:08:23.618 "params": { 00:08:23.618 "name": "Nvme$subsystem", 00:08:23.618 "trtype": "$TEST_TRANSPORT", 00:08:23.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.618 "adrfam": "ipv4", 00:08:23.618 "trsvcid": "$NVMF_PORT", 00:08:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.618 "hdgst": ${hdgst:-false}, 00:08:23.618 "ddgst": ${ddgst:-false} 00:08:23.618 }, 00:08:23.618 "method": "bdev_nvme_attach_controller" 00:08:23.618 } 00:08:23.618 EOF 00:08:23.618 )") 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
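The target bring-up traced above is driven entirely over SPDK's JSON-RPC socket; rpc_cmd is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock of the target started in the namespace. A sketch of the same sequence as plain rpc.py calls, with flags copied from the trace (a replay of what this job ran, not an independent recipe):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport, zero-copy enabled
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                      # allow any host, max 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then emits the matching bdev_nvme_attach_controller configuration (printed just below), which bdevperf reads from /dev/fd/62, so the initiator side needs no separate config file.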
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:08:23.618 {
00:08:23.618 "params": {
00:08:23.618 "name": "Nvme$subsystem",
00:08:23.618 "trtype": "$TEST_TRANSPORT",
00:08:23.618 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:23.618 "adrfam": "ipv4",
00:08:23.618 "trsvcid": "$NVMF_PORT",
00:08:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:23.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:23.618 "hdgst": ${hdgst:-false},
00:08:23.618 "ddgst": ${ddgst:-false}
00:08:23.618 },
00:08:23.618 "method": "bdev_nvme_attach_controller"
00:08:23.618 }
00:08:23.618 EOF
00:08:23.618 )")
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:08:23.618 18:24:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:08:23.618 "params": {
00:08:23.618 "name": "Nvme1",
00:08:23.618 "trtype": "tcp",
00:08:23.618 "traddr": "10.0.0.2",
00:08:23.618 "adrfam": "ipv4",
00:08:23.618 "trsvcid": "4420",
00:08:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:23.619 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:23.619 "hdgst": false,
00:08:23.619 "ddgst": false
00:08:23.619 },
00:08:23.619 "method": "bdev_nvme_attach_controller"
00:08:23.619 }'
00:08:23.619 [2024-10-08 18:24:17.597032] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... [2024-10-08 18:24:17.597098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067881 ]
00:08:23.880 [2024-10-08 18:24:17.678026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.880 [2024-10-08 18:24:17.773749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.148 Running I/O for 10 seconds...
00:08:26.479 6034.00 IOPS, 47.14 MiB/s [2024-10-08T16:24:21.480Z] 6092.00 IOPS, 47.59 MiB/s [2024-10-08T16:24:22.421Z] 6102.33 IOPS, 47.67 MiB/s [2024-10-08T16:24:23.363Z] 6728.75 IOPS, 52.57 MiB/s [2024-10-08T16:24:24.301Z] 7234.20 IOPS, 56.52 MiB/s [2024-10-08T16:24:25.241Z] 7569.50 IOPS, 59.14 MiB/s [2024-10-08T16:24:26.181Z] 7810.57 IOPS, 61.02 MiB/s [2024-10-08T16:24:27.562Z] 7990.25 IOPS, 62.42 MiB/s [2024-10-08T16:24:28.500Z] 8126.44 IOPS, 63.49 MiB/s [2024-10-08T16:24:28.500Z] 8235.80 IOPS, 64.34 MiB/s
00:08:34.443 Latency(us)
00:08:34.443 [2024-10-08T16:24:28.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:34.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:34.443 Verification LBA range: start 0x0 length 0x1000
00:08:34.443 Nvme1n1 : 10.01 8237.67 64.36 0.00 0.00 15492.85 805.55 29491.20
00:08:34.443 [2024-10-08T16:24:28.500Z] ===================================================================================================================
00:08:34.443 [2024-10-08T16:24:28.500Z] Total : 8237.67 64.36 0.00 0.00 15492.85 805.55 29491.20
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1069923
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:08:34.443 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:08:34.443 {
00:08:34.443 "params": {
"Nvme$subsystem", 00:08:34.444 "trtype": "$TEST_TRANSPORT", 00:08:34.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.444 "adrfam": "ipv4", 00:08:34.444 "trsvcid": "$NVMF_PORT", 00:08:34.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.444 "hdgst": ${hdgst:-false}, 00:08:34.444 "ddgst": ${ddgst:-false} 00:08:34.444 }, 00:08:34.444 "method": "bdev_nvme_attach_controller" 00:08:34.444 } 00:08:34.444 EOF 00:08:34.444 )") 00:08:34.444 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:34.444 [2024-10-08 18:24:28.260888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.260918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:08:34.444 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:34.444 18:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:34.444 "params": { 00:08:34.444 "name": "Nvme1", 00:08:34.444 "trtype": "tcp", 00:08:34.444 "traddr": "10.0.0.2", 00:08:34.444 "adrfam": "ipv4", 00:08:34.444 "trsvcid": "4420", 00:08:34.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:34.444 "hdgst": false, 00:08:34.444 "ddgst": false 00:08:34.444 }, 00:08:34.444 "method": "bdev_nvme_attach_controller" 00:08:34.444 }' 00:08:34.444 [2024-10-08 18:24:28.272890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.272899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.284917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.284924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.296949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.296957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.306196] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:08:34.444 [2024-10-08 18:24:28.306279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069923 ] 00:08:34.444 [2024-10-08 18:24:28.308983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.308991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.321013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.321021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.333040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.333047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.345070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.345077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.357101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.357107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.369132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.369138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.381161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.381169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.387294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.444 [2024-10-08 18:24:28.393192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.393200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.405221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.405229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.417251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.417264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.429293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.429303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.441000] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.444 [2024-10-08 18:24:28.441313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.441321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.453347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:34.444 [2024-10-08 18:24:28.453357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.465378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.465391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.477406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.477415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.444 [2024-10-08 18:24:28.489438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.444 [2024-10-08 18:24:28.489451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.501466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.501474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.513495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.513502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.525534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.525550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.537562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.537571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.549592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.549602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.561623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.561631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.573653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.573660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.585686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.585692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.597719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.597728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.609749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.609756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.621780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.621786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 
18:24:28.633812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.633819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.645845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.645853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.657874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.657881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.669905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.669912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.681939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.681946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.693971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.693980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.706009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.706022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 Running I/O for 5 seconds... 00:08:34.705 [2024-10-08 18:24:28.721709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.721725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.734564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.734580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.747555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.747571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.705 [2024-10-08 18:24:28.761184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.705 [2024-10-08 18:24:28.761199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.774848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.774864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.787319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.787333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.800003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.800018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.813786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:34.965 [2024-10-08 18:24:28.813801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.826898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.826914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.840724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.840739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.853469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.853484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.866522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.866537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.880040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.880055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.965 [2024-10-08 18:24:28.893020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.965 [2024-10-08 18:24:28.893035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.906197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.906212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.919730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.919746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.932777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.932791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.946190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.946206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.959854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.959870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.972856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.972871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.985660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.985675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:28.998522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:28.998539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.966 [2024-10-08 18:24:29.012313] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.966 [2024-10-08 18:24:29.012330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.024664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.024680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.037520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.037536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.050170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.050184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.063923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.063937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.077160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.077175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.089953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.089968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.103058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.103072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.116617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.116633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.130378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.130393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.142942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.142957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.155424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.155438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.168547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.168562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.182168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.182184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.195547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.195562] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.208634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.208649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.221650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.221665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.234286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.234301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.247442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.247457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.260113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.260128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.227 [2024-10-08 18:24:29.272986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.227 [2024-10-08 18:24:29.273001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.286393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.286409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.299201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.299216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.311917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.311931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.324381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.324395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.337391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.337406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.350896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.350912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.363664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.363678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.376592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.376607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.389598] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.389613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.403202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.403217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.416647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.416661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.429984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.429999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.443248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.443262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.456508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.456522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.469348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.469363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.482943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.482958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.495741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.495757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.509075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.509090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.521737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.521752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.488 [2024-10-08 18:24:29.535028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.488 [2024-10-08 18:24:29.535043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.547878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.547893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.560933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.560948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.574289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.574304] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.587758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.587772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.600240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.600254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.613517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.613532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.626761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.626776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.640162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.640177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.652641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.652656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.665531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.665546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.678811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.678826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.692397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.692412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.704284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.704305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 19154.00 IOPS, 149.64 MiB/s [2024-10-08T16:24:29.805Z] [2024-10-08 18:24:29.717890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.717904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.730290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.730304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.743636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.743650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.757080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.757095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 
18:24:29.770425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.770440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.783918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.783932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.748 [2024-10-08 18:24:29.797426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.748 [2024-10-08 18:24:29.797441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.811069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.811084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.824783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.824798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.837561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.837575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.850764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.850779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.863709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.863724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.876198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.876212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.888651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.888666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.901718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.901733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.914647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.914662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.928171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.928186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.941253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.941267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.954430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.954448] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.968085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.968100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.981730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.981745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:29.994409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:29.994424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:30.007429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:30.007444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:30.020077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:30.020091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:30.033326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:30.033340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:30.046682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:30.046697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.008 [2024-10-08 18:24:30.060219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.008 [2024-10-08 18:24:30.060234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.073108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.073123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.085715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.085731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.097995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.098010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.111472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.111487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.124413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.124428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.137635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.137650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.150855] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.150870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.163637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.163652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.176960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.176979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.190442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.190457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.203651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.203670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.216196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.216210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.228972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.228991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.242506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.242520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.255869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.255883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.269223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.269238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.282415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.282430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.295946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.295961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.308953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.308968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.269 [2024-10-08 18:24:30.322436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.269 [2024-10-08 18:24:30.322450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.335008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.335023] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.348465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.348479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.362025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.362040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.375347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.375362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.388881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.388896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.402317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.402331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.415193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.415207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.428384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.428398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.441527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.441541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.454893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.454908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.467564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.467579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.480403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.480418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.493854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.493869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.506385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.506399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.519924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.519938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.532837] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.532851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.545439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.545453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.558050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.558064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.571280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.571295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.529 [2024-10-08 18:24:30.583902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.529 [2024-10-08 18:24:30.583917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.596950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.596965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.610424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.610439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.622711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.622725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.636193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.636208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.649397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.649412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.662316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.662331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.675708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.675723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.689093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.689109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 [2024-10-08 18:24:30.703041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.790 [2024-10-08 18:24:30.703056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.790 19237.00 IOPS, 150.29 MiB/s [2024-10-08T16:24:30.847Z] [2024-10-08 18:24:30.714255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:36.790 [2024-10-08 18:24:30.714269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:36.790 [2024-10-08 18:24:30.727003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:36.790 [2024-10-08 18:24:30.727018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:37.050 [... the same spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeats every 12-13 ms ...]
00:08:37.830 19242.00 IOPS, 150.33 MiB/s [2024-10-08T16:24:31.887Z]
00:08:38.091 [... the error pair keeps repeating ...]
00:08:38.874 19255.25 IOPS, 150.43 MiB/s [2024-10-08T16:24:32.931Z]
00:08:39.136 [... the error pair keeps repeating, last occurrence at 2024-10-08 18:24:33.715566 ...]
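The pair of errors repeated above is the expected result of the zcopy test's add-namespace retry loop: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so every spdk_nvmf_subsystem_add_ns_ext call is rejected and nvmf_rpc_ns_paused reports the failed RPC. A loop like the following sketch reproduces the pattern against a running target; the loop itself is hypothetical, while the RPC method and arguments are the ones this log records:

    # Hypothetical driver loop; method name and arguments as recorded in this log.
    while true; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            || echo 'NSID 1 already in use'
        sleep 0.013   # matches the ~13 ms spacing of the entries above
    done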
00:08:39.919 19257.80 IOPS, 150.45 MiB/s
00:08:39.919 Latency(us)
[2024-10-08T16:24:33.976Z] Device Information : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average       min       max
00:08:39.919 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:39.919 Nvme1n1            :       5.00 19267.00     150.52      0.00     0.00    6638.16   2826.24  18459.31
[2024-10-08T16:24:33.976Z] ===================================================================================================================
[2024-10-08T16:24:33.976Z] Total              :            19267.00     150.52      0.00     0.00    6638.16   2826.24  18459.31
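The MiB/s figures in the summary follow from the IOPS column and the 8192-byte I/O size shown in the Job line: for the Total row, 19267.00 IOPS x 8192 bytes = 157,835,264 bytes/s, i.e. 150.52 MiB/s. A quick check with bc:

    $ echo 'scale=2; 19267.00 * 8192 / (1024 * 1024)' | bc
    150.52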
00:08:39.919 [2024-10-08 18:24:33.725315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:39.919 [2024-10-08 18:24:33.725328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:39.919 [... the error pair repeats, last occurrence at 2024-10-08 18:24:33.833593 ...]
00:08:39.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1069923) - No such process
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1069923
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.919 18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:39.919 delay0
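The two rpc_cmd calls traced above detach the original namespace and build delay0, a delay bdev layered on malloc0 with one-second (1,000,000 us) average and p99 latencies for both reads and writes, so that the abort run that follows has slow in-flight I/O to cancel; the re-add of delay0 as NSID 1 follows just below. Outside the harness the same sequence can be issued with SPDK's rpc.py client directly; a sketch (rpc_cmd is the harness wrapper around it, and the default RPC socket is assumed):

    # Same RPCs as in the trace above, issued directly via scripts/rpc.py.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1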
18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:24:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:40.180 [2024-10-08 18:24:34.043152] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:48.322 [2024-10-08 18:24:41.206463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d640 is same with the state(6) to be set
00:08:48.322 Initializing NVMe Controllers
00:08:48.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:48.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:48.322 Initialization complete. Launching workers.
00:08:48.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 32822
00:08:48.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32919, failed to submit 137
00:08:48.322 success 32858, unsuccessful 61, failed 0
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:48.322 rmmod nvme_tcp
00:08:48.322 rmmod nvme_fabrics
00:08:48.322 rmmod nvme_keyring
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1067558 ']'
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1067558
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1067558 ']'
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1067558
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
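For reference, the abort example above ran five seconds (-t 5) of queue-depth-64 (-q 64) random read/write traffic at a 50% read mix (-w randrw -M 50) on one core (-c 0x1) against the TCP transport ID given with -r, submitting abort commands against the in-flight I/O; the flag reading is inferred from SPDK's perf-style tool family rather than restated by this log. The tallies it printed are self-consistent: 32,858 successful + 61 unsuccessful = 32,919 aborts submitted. The invocation, rewrapped for readability:

    # As recorded in the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'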
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067558
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067558'
00:08:48.322 killing process with pid 1067558
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1067558
00:08:48.322 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1067558
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:48.323 18:24:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
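The iptr helper traced above restores the host firewall by filtering the saved ruleset; judging from the three commands sharing one trace location (nvmf/common.sh@789), they appear to form a single pipeline, equivalent to:

    # Replays the saved ruleset minus any SPDK_NVMF-tagged rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore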
--transport=tcp 00:08:49.707 * Looking for test storage... 00:08:49.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.707 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:49.707 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:08:49.707 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:49.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.968 --rc genhtml_branch_coverage=1 00:08:49.968 --rc genhtml_function_coverage=1 00:08:49.968 --rc genhtml_legend=1 00:08:49.968 --rc geninfo_all_blocks=1 00:08:49.968 --rc geninfo_unexecuted_blocks=1 00:08:49.968 00:08:49.968 ' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:49.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.968 --rc genhtml_branch_coverage=1 00:08:49.968 --rc genhtml_function_coverage=1 00:08:49.968 --rc genhtml_legend=1 00:08:49.968 --rc geninfo_all_blocks=1 00:08:49.968 --rc geninfo_unexecuted_blocks=1 00:08:49.968 00:08:49.968 ' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:49.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.968 --rc genhtml_branch_coverage=1 00:08:49.968 --rc genhtml_function_coverage=1 00:08:49.968 --rc genhtml_legend=1 00:08:49.968 --rc geninfo_all_blocks=1 00:08:49.968 --rc geninfo_unexecuted_blocks=1 00:08:49.968 00:08:49.968 ' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:49.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.968 --rc genhtml_branch_coverage=1 00:08:49.968 --rc genhtml_function_coverage=1 00:08:49.968 --rc genhtml_legend=1 00:08:49.968 --rc geninfo_all_blocks=1 00:08:49.968 --rc geninfo_unexecuted_blocks=1 00:08:49.968 00:08:49.968 ' 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
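(Note on the cmp_versions/lt xtrace just above: the test preamble probes the
installed lcov version and uses "lt 1.15 2" to decide which coverage option
spellings to export. The helper splits each dotted version on the separators
".-:" and compares the fields numerically, left to right. A minimal bash
sketch of that idea, assuming purely numeric fields; this is a simplification
for illustration, not the exact scripts/common.sh code:

    version_lt() {                     # version_lt 1.15 2  ->  true (1.15 < 2)
        local IFS=.-:                  # same field separators the trace shows
        local -a a=($1) b=($2)         # split "1.15" into (1 15), "2" into (2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field wins
        done
        return 1                       # equal versions are not "less than"
    }

Here the detected lcov predates 2.x, so the legacy --rc lcov_branch_coverage=1
/ --rc lcov_function_coverage=1 spellings are exported, as the trace shows.)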
00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.968 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:49.969 
18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.969 18:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:58.105 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:58.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:58.105 18:24:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:58.105 Found net devices under 0000:31:00.0: cvl_0_0 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:58.105 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:58.106 Found net devices under 0000:31:00.1: cvl_0_1 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:58.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:08:58.106 00:08:58.106 --- 10.0.0.2 ping statistics --- 00:08:58.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.106 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:08:58.106 00:08:58.106 --- 10.0.0.1 ping statistics --- 00:08:58.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.106 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1076848 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1076848 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1076848 ']' 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 [2024-10-08 18:24:51.689402] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
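(The nvmf_tcp_init trace above amounts to the following topology, condensed
here for readability. The interface and namespace names are the ones the log
prints for this machine's two e810 ports; on another host they will differ:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first port serves the target
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The harness additionally tags the iptables rule with an SPDK_NVMF comment so
that teardown can strip it via iptables-save | grep -v SPDK_NVMF |
iptables-restore, as seen in the fini traces. Both one-packet pings succeed,
after which nvmf_tgt is launched inside the namespace (ip netns exec
cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the rest of the test
talks to it at 10.0.0.2.)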
00:08:58.106 [2024-10-08 18:24:51.689465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.106 [2024-10-08 18:24:51.765649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.106 [2024-10-08 18:24:51.853795] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.106 [2024-10-08 18:24:51.853857] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.106 [2024-10-08 18:24:51.853866] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.106 [2024-10-08 18:24:51.853871] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.106 [2024-10-08 18:24:51.853876] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.106 [2024-10-08 18:24:51.855774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.106 [2024-10-08 18:24:51.855934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.106 [2024-10-08 18:24:51.856046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.106 [2024-10-08 18:24:51.856084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.106 18:24:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 [2024-10-08 18:24:52.036912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 Malloc0 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 [2024-10-08 18:24:52.102663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:58.106 test case1: single bdev can't be used in multiple subsystems 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:58.106 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.107 [2024-10-08 18:24:52.138479] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:58.107 [2024-10-08 18:24:52.138503] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:58.107 [2024-10-08 18:24:52.138512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.107 request: 00:08:58.107 { 00:08:58.107 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:58.107 "namespace": { 00:08:58.107 "bdev_name": "Malloc0", 00:08:58.107 "no_auto_visible": false 
00:08:58.107 }, 00:08:58.107 "method": "nvmf_subsystem_add_ns", 00:08:58.107 "req_id": 1 00:08:58.107 } 00:08:58.107 Got JSON-RPC error response 00:08:58.107 response: 00:08:58.107 { 00:08:58.107 "code": -32602, 00:08:58.107 "message": "Invalid parameters" 00:08:58.107 } 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:58.107 Adding namespace failed - expected result. 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:58.107 test case2: host connect to nvmf target in multiple paths 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.107 [2024-10-08 18:24:52.150665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.107 18:24:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.020 18:24:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:01.404 18:24:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.404 18:24:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:01.404 18:24:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.404 18:24:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:01.404 18:24:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:03.315 18:24:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:03.315 [global] 00:09:03.315 thread=1 00:09:03.315 invalidate=1 00:09:03.315 rw=write 00:09:03.315 time_based=1 00:09:03.315 runtime=1 00:09:03.315 ioengine=libaio 00:09:03.315 direct=1 00:09:03.315 bs=4096 00:09:03.315 iodepth=1 00:09:03.315 norandommap=0 00:09:03.315 numjobs=1 00:09:03.315 00:09:03.315 verify_dump=1 00:09:03.315 verify_backlog=512 00:09:03.315 verify_state_save=0 00:09:03.315 do_verify=1 00:09:03.315 verify=crc32c-intel 00:09:03.315 [job0] 00:09:03.315 filename=/dev/nvme0n1 00:09:03.315 Could not set queue depth (nvme0n1) 00:09:03.884 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.884 fio-3.35 00:09:03.884 Starting 1 thread 00:09:04.826 00:09:04.826 job0: (groupid=0, jobs=1): err= 0: pid=1078222: Tue Oct 8 18:24:58 2024 00:09:04.826 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:04.826 slat (nsec): min=25533, max=60202, avg=26642.72, stdev=3450.61 00:09:04.826 clat (usec): min=711, max=1332, avg=1015.64, stdev=80.31 00:09:04.826 lat (usec): min=737, max=1358, avg=1042.28, stdev=80.21 00:09:04.826 clat percentiles (usec): 00:09:04.826 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 963], 00:09:04.826 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1029], 60.00th=[ 1037], 00:09:04.826 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:09:04.826 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1336], 99.95th=[ 1336], 00:09:04.826 | 99.99th=[ 1336] 00:09:04.826 write: IOPS=737, BW=2949KiB/s (3020kB/s)(2952KiB/1001msec); 0 zone resets 00:09:04.826 slat (usec): min=9, max=25310, avg=64.13, stdev=930.66 00:09:04.826 clat (usec): min=223, max=772, avg=554.84, stdev=100.92 00:09:04.826 lat (usec): min=257, max=25913, avg=618.97, stdev=938.26 00:09:04.826 clat percentiles (usec): 00:09:04.826 | 1.00th=[ 314], 5.00th=[ 363], 10.00th=[ 416], 20.00th=[ 457], 00:09:04.826 | 30.00th=[ 519], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 586], 00:09:04.826 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 693], 00:09:04.826 | 99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 775], 99.95th=[ 775], 00:09:04.826 | 99.99th=[ 775] 00:09:04.826 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:04.826 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:04.826 lat (usec) : 250=0.08%, 500=16.80%, 750=42.16%, 1000=13.12% 00:09:04.826 lat (msec) : 2=27.84% 00:09:04.826 cpu : usr=1.70%, sys=3.80%, ctx=1253, majf=0, minf=1 00:09:04.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.826 issued rwts: total=512,738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.826 00:09:04.826 Run status group 0 (all jobs): 00:09:04.826 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:04.826 WRITE: bw=2949KiB/s (3020kB/s), 2949KiB/s-2949KiB/s (3020kB/s-3020kB/s), io=2952KiB (3023kB), run=1001-1001msec 00:09:04.826 00:09:04.826 Disk stats (read/write): 00:09:04.826 nvme0n1: ios=538/563, merge=0/0, ticks=1494/300, in_queue=1794, util=98.70% 00:09:04.826 18:24:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:05.085 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.086 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:05.086 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.086 18:24:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.086 rmmod nvme_tcp 00:09:05.086 rmmod nvme_fabrics 00:09:05.086 rmmod nvme_keyring 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1076848 ']' 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1076848 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1076848 ']' 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1076848 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1076848 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1076848' 00:09:05.086 killing process with pid 1076848 00:09:05.086 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1076848 00:09:05.086 18:24:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1076848 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.345 18:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.888 00:09:07.888 real 0m17.698s 00:09:07.888 user 0m48.543s 00:09:07.888 sys 0m6.784s 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.888 ************************************ 00:09:07.888 END TEST nvmf_nmic 00:09:07.888 ************************************ 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.888 ************************************ 00:09:07.888 START TEST nvmf_fio_target 00:09:07.888 ************************************ 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:07.888 * Looking for test storage... 
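(For reference, the nvmf_nmic test that just finished reduces to this RPC
sequence, paraphrased with scripts/rpc.py, which the rpc_cmd wrapper in the
trace drives, and with the values taken from the trace above:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # must fail
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Test case 1 passes precisely because the add_ns on cnode2 fails: Malloc0 is
already claimed exclusive_write by cnode1, so the target returns the -32602
"Invalid parameters" JSON-RPC error captured above. Test case 2 then connects
the host through both listeners (ports 4420 and 4421), runs the 4k
write-verify fio job against /dev/nvme0n1, and disconnects both controllers.
The nvmf_fio_target test starting below repeats the same preamble, the
storage probe and lcov check, before its own workload.)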
00:09:07.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.888 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.889 --rc genhtml_branch_coverage=1 00:09:07.889 --rc genhtml_function_coverage=1 00:09:07.889 --rc genhtml_legend=1 00:09:07.889 --rc geninfo_all_blocks=1 00:09:07.889 --rc geninfo_unexecuted_blocks=1 00:09:07.889 00:09:07.889 ' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.889 --rc genhtml_branch_coverage=1 00:09:07.889 --rc genhtml_function_coverage=1 00:09:07.889 --rc genhtml_legend=1 00:09:07.889 --rc geninfo_all_blocks=1 00:09:07.889 --rc geninfo_unexecuted_blocks=1 00:09:07.889 00:09:07.889 ' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.889 --rc genhtml_branch_coverage=1 00:09:07.889 --rc genhtml_function_coverage=1 00:09:07.889 --rc genhtml_legend=1 00:09:07.889 --rc geninfo_all_blocks=1 00:09:07.889 --rc geninfo_unexecuted_blocks=1 00:09:07.889 00:09:07.889 ' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:07.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.889 --rc genhtml_branch_coverage=1 00:09:07.889 --rc genhtml_function_coverage=1 00:09:07.889 --rc genhtml_legend=1 00:09:07.889 --rc geninfo_all_blocks=1 00:09:07.889 --rc geninfo_unexecuted_blocks=1 00:09:07.889 00:09:07.889 ' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.889 18:25:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.889 18:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.030 18:25:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.030 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:16.031 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:16.031 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.031 18:25:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:16.031 Found net devices under 0000:31:00.0: cvl_0_0 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:16.031 Found net devices under 0000:31:00.1: cvl_0_1 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.031 18:25:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.031 18:25:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:09:16.031 00:09:16.031 --- 10.0.0.2 ping statistics --- 00:09:16.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.031 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:16.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:09:16.031 00:09:16.031 --- 10.0.0.1 ping statistics --- 00:09:16.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.031 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1082885 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1082885 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1082885 ']' 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.031 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.032 18:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.032 [2024-10-08 18:25:09.417121] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
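Note: the nvmftestinit steps above build the test topology out of the two cabled e810 ports by moving one port (cvl_0_0) into a private network namespace as the target side and leaving its peer (cvl_0_1) in the root namespace as the initiator, addressed from 10.0.0.0/24. A minimal sketch of the same steps, assuming two connected ports with the names discovered in this log (they will differ on other machines; requires root):

  # Sketch of nvmf_tcp_init as logged above; interface names are examples.
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0        # becomes the NVMe-oF target side (10.0.0.2)
  INI_IF=cvl_0_1        # stays in the root namespace as initiator (10.0.0.1)

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

The two one-packet pings are exactly the reachability check the log records before nvmf_tgt is started inside the namespace.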
00:09:16.032 [2024-10-08 18:25:09.417189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.032 [2024-10-08 18:25:09.505286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.032 [2024-10-08 18:25:09.601270] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.032 [2024-10-08 18:25:09.601336] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.032 [2024-10-08 18:25:09.601349] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.032 [2024-10-08 18:25:09.601357] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.032 [2024-10-08 18:25:09.601363] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.032 [2024-10-08 18:25:09.603412] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.032 [2024-10-08 18:25:09.603578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.032 [2024-10-08 18:25:09.603741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.032 [2024-10-08 18:25:09.603741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.293 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.554 [2024-10-08 18:25:10.438675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.554 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.814 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:16.814 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.074 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:17.074 18:25:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.335 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:17.335 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.335 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:17.335 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:17.595 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.855 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:17.855 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.116 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:18.116 18:25:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.116 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:18.116 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:18.376 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.636 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.636 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.636 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.636 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:18.897 18:25:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.159 [2024-10-08 18:25:12.983639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.159 18:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:19.159 18:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:19.419 18:25:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.331 18:25:14 
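Note: the target/fio.sh steps above reduce to a short RPC recipe: one TCP transport, two standalone malloc bdevs, a two-member raid0, a three-member concat, and a subsystem exposing all four as namespaces. A condensed sketch of the same sequence (rpc.py stands for the full scripts/rpc.py path used in the log; the bdev names in comments are the ones these calls return here):

  # Condensed from the rpc.py calls logged above; error handling omitted.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512    # Malloc0 -> plain namespace
  rpc.py bdev_malloc_create 64 512    # Malloc1 -> plain namespace
  rpc.py bdev_malloc_create 64 512    # Malloc2 \  raid0 members
  rpc.py bdev_malloc_create 64 512    # Malloc3 /
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512    # Malloc4 \
  rpc.py bdev_malloc_create 64 512    # Malloc5  > concat0 members
  rpc.py bdev_malloc_create 64 512    # Malloc6 /
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # NVME_HOST expands to --hostnqn/--hostid as set in nvmf/common.sh above
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

After the connect, waitforserial polls lsblk until four block devices with serial SPDKISFASTANDAWESOME appear, which is what the next log lines show.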
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:21.331 18:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.331 18:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.331 18:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:21.331 18:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:21.331 18:25:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.302 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.303 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.303 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.303 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:23.303 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.303 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:23.303 18:25:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:23.303 [global] 00:09:23.303 thread=1 00:09:23.303 invalidate=1 00:09:23.303 rw=write 00:09:23.303 time_based=1 00:09:23.303 runtime=1 00:09:23.303 ioengine=libaio 00:09:23.303 direct=1 00:09:23.303 bs=4096 00:09:23.303 iodepth=1 00:09:23.303 norandommap=0 00:09:23.303 numjobs=1 00:09:23.303 00:09:23.303 verify_dump=1 00:09:23.303 verify_backlog=512 00:09:23.303 verify_state_save=0 00:09:23.303 do_verify=1 00:09:23.303 verify=crc32c-intel 00:09:23.303 [job0] 00:09:23.303 filename=/dev/nvme0n1 00:09:23.303 [job1] 00:09:23.303 filename=/dev/nvme0n2 00:09:23.303 [job2] 00:09:23.303 filename=/dev/nvme0n3 00:09:23.303 [job3] 00:09:23.303 filename=/dev/nvme0n4 00:09:23.303 Could not set queue depth (nvme0n1) 00:09:23.303 Could not set queue depth (nvme0n2) 00:09:23.303 Could not set queue depth (nvme0n3) 00:09:23.303 Could not set queue depth (nvme0n4) 00:09:23.303 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.303 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.303 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.303 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.303 fio-3.35 00:09:23.303 Starting 4 threads 00:09:24.785 00:09:24.785 job0: (groupid=0, jobs=1): err= 0: pid=1084560: Tue Oct 8 18:25:18 2024 00:09:24.785 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:24.785 slat (nsec): min=7641, max=43072, avg=24983.70, stdev=2554.06 00:09:24.785 clat (usec): min=710, max=1269, avg=1037.96, stdev=83.27 00:09:24.785 lat (usec): min=734, max=1294, avg=1062.94, stdev=83.05 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 799], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 988], 
00:09:24.785 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:09:24.785 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1156], 00:09:24.785 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1270], 99.95th=[ 1270], 00:09:24.785 | 99.99th=[ 1270] 00:09:24.785 write: IOPS=705, BW=2821KiB/s (2889kB/s)(2824KiB/1001msec); 0 zone resets 00:09:24.785 slat (nsec): min=9579, max=50921, avg=28422.03, stdev=8781.05 00:09:24.785 clat (usec): min=168, max=1117, avg=604.53, stdev=113.09 00:09:24.785 lat (usec): min=179, max=1129, avg=632.96, stdev=116.78 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 289], 5.00th=[ 392], 10.00th=[ 453], 20.00th=[ 506], 00:09:24.785 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:09:24.785 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 758], 00:09:24.785 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 1123], 99.95th=[ 1123], 00:09:24.785 | 99.99th=[ 1123] 00:09:24.785 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.785 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.785 lat (usec) : 250=0.25%, 500=11.00%, 750=43.51%, 1000=13.30% 00:09:24.785 lat (msec) : 2=31.94% 00:09:24.785 cpu : usr=1.40%, sys=3.90%, ctx=1218, majf=0, minf=1 00:09:24.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 issued rwts: total=512,706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.785 job1: (groupid=0, jobs=1): err= 0: pid=1084565: Tue Oct 8 18:25:18 2024 00:09:24.785 read: IOPS=620, BW=2483KiB/s (2542kB/s)(2572KiB/1036msec) 00:09:24.785 slat (nsec): min=6620, max=57759, avg=23993.45, stdev=7809.41 00:09:24.785 clat (usec): min=395, max=40752, avg=766.30, stdev=1583.04 00:09:24.785 lat (usec): min=421, max=40779, avg=790.29, stdev=1583.29 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 457], 5.00th=[ 523], 10.00th=[ 562], 20.00th=[ 611], 00:09:24.785 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 742], 00:09:24.785 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 873], 00:09:24.785 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[40633], 99.95th=[40633], 00:09:24.785 | 99.99th=[40633] 00:09:24.785 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:09:24.785 slat (nsec): min=8959, max=67247, avg=31019.57, stdev=9761.76 00:09:24.785 clat (usec): min=129, max=825, avg=471.36, stdev=116.93 00:09:24.785 lat (usec): min=141, max=847, avg=502.38, stdev=120.37 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 186], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[ 375], 00:09:24.785 | 30.00th=[ 408], 40.00th=[ 445], 50.00th=[ 474], 60.00th=[ 506], 00:09:24.785 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 652], 00:09:24.785 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 775], 99.95th=[ 824], 00:09:24.785 | 99.99th=[ 824] 00:09:24.785 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=2 00:09:24.785 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:24.785 lat (usec) : 250=1.62%, 500=35.75%, 750=48.41%, 1000=14.10% 00:09:24.785 lat (msec) : 2=0.06%, 50=0.06% 00:09:24.785 cpu : usr=3.77%, sys=5.60%, ctx=1667, majf=0, minf=1 00:09:24.785 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 issued rwts: total=643,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.785 job2: (groupid=0, jobs=1): err= 0: pid=1084581: Tue Oct 8 18:25:18 2024 00:09:24.785 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:24.785 slat (nsec): min=8271, max=45006, avg=26925.03, stdev=3602.44 00:09:24.785 clat (usec): min=574, max=1232, avg=968.42, stdev=81.26 00:09:24.785 lat (usec): min=601, max=1259, avg=995.35, stdev=81.51 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 906], 00:09:24.785 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:09:24.785 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:24.785 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:24.785 | 99.99th=[ 1237] 00:09:24.785 write: IOPS=763, BW=3053KiB/s (3126kB/s)(3056KiB/1001msec); 0 zone resets 00:09:24.785 slat (nsec): min=9370, max=66421, avg=30426.90, stdev=9494.41 00:09:24.785 clat (usec): min=240, max=863, avg=598.79, stdev=109.39 00:09:24.785 lat (usec): min=251, max=897, avg=629.22, stdev=113.63 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 306], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 502], 00:09:24.785 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:09:24.785 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 758], 00:09:24.785 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 865], 00:09:24.785 | 99.99th=[ 865] 00:09:24.785 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.785 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.785 lat (usec) : 250=0.08%, 500=11.52%, 750=45.14%, 1000=27.66% 00:09:24.785 lat (msec) : 2=15.60% 00:09:24.785 cpu : usr=2.70%, sys=4.90%, ctx=1276, majf=0, minf=1 00:09:24.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 issued rwts: total=512,764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.785 job3: (groupid=0, jobs=1): err= 0: pid=1084589: Tue Oct 8 18:25:18 2024 00:09:24.785 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:24.785 slat (nsec): min=8207, max=49919, avg=24538.43, stdev=5835.66 00:09:24.785 clat (usec): min=682, max=1273, avg=976.56, stdev=104.30 00:09:24.785 lat (usec): min=710, max=1289, avg=1001.10, stdev=104.52 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 742], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 889], 00:09:24.785 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 988], 60.00th=[ 1012], 00:09:24.785 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:09:24.785 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1270], 99.95th=[ 1270], 00:09:24.785 | 99.99th=[ 1270] 00:09:24.785 write: IOPS=792, BW=3169KiB/s (3245kB/s)(3172KiB/1001msec); 0 zone resets 00:09:24.785 slat (nsec): min=9978, max=67590, avg=30127.48, stdev=9442.25 00:09:24.785 clat (usec): min=239, max=915, avg=573.62, 
stdev=118.98 00:09:24.785 lat (usec): min=253, max=951, avg=603.74, stdev=121.45 00:09:24.785 clat percentiles (usec): 00:09:24.785 | 1.00th=[ 306], 5.00th=[ 367], 10.00th=[ 404], 20.00th=[ 474], 00:09:24.785 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:09:24.785 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 758], 00:09:24.785 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 914], 99.95th=[ 914], 00:09:24.785 | 99.99th=[ 914] 00:09:24.785 bw ( KiB/s): min= 4096, max= 4096, per=32.27%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.785 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.785 lat (usec) : 250=0.15%, 500=16.32%, 750=40.92%, 1000=25.13% 00:09:24.785 lat (msec) : 2=17.47% 00:09:24.785 cpu : usr=1.90%, sys=3.50%, ctx=1305, majf=0, minf=1 00:09:24.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.785 issued rwts: total=512,793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.785 00:09:24.785 Run status group 0 (all jobs): 00:09:24.785 READ: bw=8413KiB/s (8615kB/s), 2046KiB/s-2483KiB/s (2095kB/s-2542kB/s), io=8716KiB (8925kB), run=1001-1036msec 00:09:24.785 WRITE: bw=12.4MiB/s (13.0MB/s), 2821KiB/s-3954KiB/s (2889kB/s-4049kB/s), io=12.8MiB (13.5MB), run=1001-1036msec 00:09:24.785 00:09:24.785 Disk stats (read/write): 00:09:24.785 nvme0n1: ios=464/512, merge=0/0, ticks=470/297, in_queue=767, util=80.36% 00:09:24.785 nvme0n2: ios=692/1024, merge=0/0, ticks=580/403, in_queue=983, util=90.83% 00:09:24.785 nvme0n3: ios=498/512, merge=0/0, ticks=479/227, in_queue=706, util=89.60% 00:09:24.785 nvme0n4: ios=514/512, merge=0/0, ticks=548/275, in_queue=823, util=92.18% 00:09:24.785 18:25:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:24.785 [global] 00:09:24.785 thread=1 00:09:24.785 invalidate=1 00:09:24.785 rw=randwrite 00:09:24.785 time_based=1 00:09:24.785 runtime=1 00:09:24.785 ioengine=libaio 00:09:24.785 direct=1 00:09:24.785 bs=4096 00:09:24.785 iodepth=1 00:09:24.785 norandommap=0 00:09:24.785 numjobs=1 00:09:24.785 00:09:24.785 verify_dump=1 00:09:24.785 verify_backlog=512 00:09:24.785 verify_state_save=0 00:09:24.785 do_verify=1 00:09:24.785 verify=crc32c-intel 00:09:24.785 [job0] 00:09:24.785 filename=/dev/nvme0n1 00:09:24.785 [job1] 00:09:24.786 filename=/dev/nvme0n2 00:09:24.786 [job2] 00:09:24.786 filename=/dev/nvme0n3 00:09:24.786 [job3] 00:09:24.786 filename=/dev/nvme0n4 00:09:24.786 Could not set queue depth (nvme0n1) 00:09:24.786 Could not set queue depth (nvme0n2) 00:09:24.786 Could not set queue depth (nvme0n3) 00:09:24.786 Could not set queue depth (nvme0n4) 00:09:25.048 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.049 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.049 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.049 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.049 fio-3.35 00:09:25.049 Starting 4 threads 00:09:26.437 00:09:26.437 
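Note: each fio-wrapper call above expands to a four-job libaio file with crc32c verification, one job per namespace of cnode1. A reconstruction of the randwrite job file just started, taken from the [global]/[jobN] lines echoed in this log (the wrapper's real file name is not shown, so nvmf.fio is a placeholder):

  # Reconstructed from the fio config echoed above; nvmf.fio is a placeholder.
  # The /dev/nvme0n* nodes come from the earlier 'nvme connect'.
  cat > nvmf.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF
  fio nvmf.fio

With do_verify=1 and verify=crc32c-intel, fio reads back what it wrote and checks checksums, so the per-job results that follow cover both the write pass and the verification reads.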
job0: (groupid=0, jobs=1): err= 0: pid=1085096: Tue Oct 8 18:25:20 2024 00:09:26.437 read: IOPS=632, BW=2529KiB/s (2590kB/s)(2532KiB/1001msec) 00:09:26.437 slat (nsec): min=6388, max=45708, avg=23275.20, stdev=7590.50 00:09:26.437 clat (usec): min=314, max=907, avg=700.85, stdev=93.99 00:09:26.437 lat (usec): min=321, max=933, avg=724.13, stdev=96.02 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 457], 5.00th=[ 545], 10.00th=[ 570], 20.00th=[ 619], 00:09:26.437 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:09:26.437 | 70.00th=[ 766], 80.00th=[ 783], 90.00th=[ 816], 95.00th=[ 840], 00:09:26.437 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:09:26.437 | 99.99th=[ 906] 00:09:26.437 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:26.437 slat (nsec): min=8334, max=52095, avg=29536.84, stdev=8553.34 00:09:26.437 clat (usec): min=106, max=3301, avg=486.83, stdev=159.13 00:09:26.437 lat (usec): min=114, max=3309, avg=516.37, stdev=161.41 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 202], 5.00th=[ 265], 10.00th=[ 318], 20.00th=[ 375], 00:09:26.437 | 30.00th=[ 412], 40.00th=[ 461], 50.00th=[ 494], 60.00th=[ 523], 00:09:26.437 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 668], 00:09:26.437 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 2073], 99.95th=[ 3294], 00:09:26.437 | 99.99th=[ 3294] 00:09:26.437 bw ( KiB/s): min= 4096, max= 4096, per=35.83%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.437 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.437 lat (usec) : 250=2.11%, 500=31.08%, 750=53.23%, 1000=13.46% 00:09:26.437 lat (msec) : 4=0.12% 00:09:26.437 cpu : usr=2.90%, sys=6.60%, ctx=1657, majf=0, minf=2 00:09:26.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 issued rwts: total=633,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.437 job1: (groupid=0, jobs=1): err= 0: pid=1085097: Tue Oct 8 18:25:20 2024 00:09:26.437 read: IOPS=499, BW=1996KiB/s (2044kB/s)(2000KiB/1002msec) 00:09:26.437 slat (nsec): min=7848, max=61258, avg=26765.99, stdev=3777.41 00:09:26.437 clat (usec): min=654, max=41963, avg=1334.88, stdev=3160.51 00:09:26.437 lat (usec): min=681, max=41989, avg=1361.64, stdev=3160.44 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 791], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1012], 00:09:26.437 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:09:26.437 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1221], 00:09:26.437 | 99.00th=[ 1450], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:26.437 | 99.99th=[42206] 00:09:26.437 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:26.437 slat (nsec): min=8950, max=60262, avg=30339.62, stdev=9028.44 00:09:26.437 clat (usec): min=150, max=1004, avg=578.44, stdev=128.22 00:09:26.437 lat (usec): min=160, max=1042, avg=608.78, stdev=131.50 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 277], 5.00th=[ 359], 10.00th=[ 404], 20.00th=[ 474], 00:09:26.437 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 611], 00:09:26.437 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 783], 00:09:26.437 | 99.00th=[ 857], 
99.50th=[ 898], 99.90th=[ 1004], 99.95th=[ 1004], 00:09:26.437 | 99.99th=[ 1004] 00:09:26.437 bw ( KiB/s): min= 4096, max= 4096, per=35.83%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.437 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.437 lat (usec) : 250=0.40%, 500=12.55%, 750=34.19%, 1000=11.76% 00:09:26.437 lat (msec) : 2=40.71%, 4=0.10%, 50=0.30% 00:09:26.437 cpu : usr=1.80%, sys=4.30%, ctx=1012, majf=0, minf=2 00:09:26.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 issued rwts: total=500,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.437 job2: (groupid=0, jobs=1): err= 0: pid=1085113: Tue Oct 8 18:25:20 2024 00:09:26.437 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:26.437 slat (nsec): min=7312, max=50071, avg=26563.25, stdev=3076.17 00:09:26.437 clat (usec): min=519, max=1690, avg=968.56, stdev=135.59 00:09:26.437 lat (usec): min=546, max=1717, avg=995.12, stdev=136.03 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 611], 5.00th=[ 725], 10.00th=[ 799], 20.00th=[ 873], 00:09:26.437 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:09:26.437 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:26.437 | 99.00th=[ 1467], 99.50th=[ 1467], 99.90th=[ 1696], 99.95th=[ 1696], 00:09:26.437 | 99.99th=[ 1696] 00:09:26.437 write: IOPS=815, BW=3261KiB/s (3339kB/s)(3264KiB/1001msec); 0 zone resets 00:09:26.437 slat (nsec): min=4359, max=67622, avg=25388.52, stdev=11470.28 00:09:26.437 clat (usec): min=204, max=1035, avg=564.47, stdev=126.51 00:09:26.437 lat (usec): min=214, max=1054, avg=589.86, stdev=131.70 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 285], 5.00th=[ 355], 10.00th=[ 400], 20.00th=[ 457], 00:09:26.437 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:09:26.437 | 70.00th=[ 627], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:09:26.437 | 99.00th=[ 865], 99.50th=[ 947], 99.90th=[ 1037], 99.95th=[ 1037], 00:09:26.437 | 99.99th=[ 1037] 00:09:26.437 bw ( KiB/s): min= 4096, max= 4096, per=35.83%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.437 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.437 lat (usec) : 250=0.23%, 500=19.05%, 750=41.04%, 1000=22.06% 00:09:26.437 lat (msec) : 2=17.62% 00:09:26.437 cpu : usr=1.50%, sys=3.80%, ctx=1330, majf=0, minf=1 00:09:26.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 issued rwts: total=512,816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.437 job3: (groupid=0, jobs=1): err= 0: pid=1085120: Tue Oct 8 18:25:20 2024 00:09:26.437 read: IOPS=233, BW=935KiB/s (958kB/s)(936KiB/1001msec) 00:09:26.437 slat (nsec): min=7923, max=51804, avg=26195.13, stdev=4338.23 00:09:26.437 clat (usec): min=733, max=42138, avg=3112.57, stdev=8966.45 00:09:26.437 lat (usec): min=760, max=42164, avg=3138.77, stdev=8966.20 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 742], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 914], 00:09:26.437 
| 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:26.437 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1303], 95.00th=[40633], 00:09:26.437 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:26.437 | 99.99th=[42206] 00:09:26.437 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:26.437 slat (nsec): min=9576, max=54974, avg=29229.37, stdev=9630.93 00:09:26.437 clat (usec): min=118, max=851, avg=476.02, stdev=147.46 00:09:26.437 lat (usec): min=128, max=883, avg=505.25, stdev=150.81 00:09:26.437 clat percentiles (usec): 00:09:26.437 | 1.00th=[ 194], 5.00th=[ 245], 10.00th=[ 285], 20.00th=[ 338], 00:09:26.437 | 30.00th=[ 383], 40.00th=[ 429], 50.00th=[ 465], 60.00th=[ 510], 00:09:26.437 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 734], 00:09:26.437 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 848], 99.95th=[ 848], 00:09:26.437 | 99.99th=[ 848] 00:09:26.437 bw ( KiB/s): min= 4096, max= 4096, per=35.83%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.437 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.437 lat (usec) : 250=3.49%, 500=36.73%, 750=26.41%, 1000=15.82% 00:09:26.437 lat (msec) : 2=15.82%, 10=0.13%, 50=1.61% 00:09:26.437 cpu : usr=1.00%, sys=2.20%, ctx=748, majf=0, minf=1 00:09:26.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.437 issued rwts: total=234,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.437 00:09:26.437 Run status group 0 (all jobs): 00:09:26.437 READ: bw=7501KiB/s (7681kB/s), 935KiB/s-2529KiB/s (958kB/s-2590kB/s), io=7516KiB (7696kB), run=1001-1002msec 00:09:26.437 WRITE: bw=11.2MiB/s (11.7MB/s), 2044KiB/s-4092KiB/s (2093kB/s-4190kB/s), io=11.2MiB (11.7MB), run=1001-1002msec 00:09:26.437 00:09:26.437 Disk stats (read/write): 00:09:26.437 nvme0n1: ios=562/771, merge=0/0, ticks=341/278, in_queue=619, util=81.16% 00:09:26.437 nvme0n2: ios=462/512, merge=0/0, ticks=522/212, in_queue=734, util=86.34% 00:09:26.437 nvme0n3: ios=518/512, merge=0/0, ticks=605/282, in_queue=887, util=93.90% 00:09:26.437 nvme0n4: ios=89/512, merge=0/0, ticks=728/233, in_queue=961, util=100.00% 00:09:26.437 18:25:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:26.437 [global] 00:09:26.437 thread=1 00:09:26.437 invalidate=1 00:09:26.437 rw=write 00:09:26.437 time_based=1 00:09:26.437 runtime=1 00:09:26.437 ioengine=libaio 00:09:26.437 direct=1 00:09:26.437 bs=4096 00:09:26.437 iodepth=128 00:09:26.437 norandommap=0 00:09:26.437 numjobs=1 00:09:26.437 00:09:26.437 verify_dump=1 00:09:26.438 verify_backlog=512 00:09:26.438 verify_state_save=0 00:09:26.438 do_verify=1 00:09:26.438 verify=crc32c-intel 00:09:26.438 [job0] 00:09:26.438 filename=/dev/nvme0n1 00:09:26.438 [job1] 00:09:26.438 filename=/dev/nvme0n2 00:09:26.438 [job2] 00:09:26.438 filename=/dev/nvme0n3 00:09:26.438 [job3] 00:09:26.438 filename=/dev/nvme0n4 00:09:26.438 Could not set queue depth (nvme0n1) 00:09:26.438 Could not set queue depth (nvme0n2) 00:09:26.438 Could not set queue depth (nvme0n3) 00:09:26.438 Could not set queue depth (nvme0n4) 00:09:26.698 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.698 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.698 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.698 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.698 fio-3.35 00:09:26.698 Starting 4 threads 00:09:28.084 00:09:28.084 job0: (groupid=0, jobs=1): err= 0: pid=1085615: Tue Oct 8 18:25:21 2024 00:09:28.084 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:09:28.084 slat (nsec): min=923, max=6522.6k, avg=61462.76, stdev=387341.80 00:09:28.084 clat (usec): min=3093, max=18604, avg=7827.00, stdev=2007.37 00:09:28.084 lat (usec): min=3098, max=18669, avg=7888.46, stdev=2036.57 00:09:28.084 clat percentiles (usec): 00:09:28.084 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6587], 00:09:28.084 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:09:28.084 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[10159], 95.00th=[12256], 00:09:28.084 | 99.00th=[16712], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:09:28.084 | 99.99th=[18482] 00:09:28.084 write: IOPS=8397, BW=32.8MiB/s (34.4MB/s)(32.9MiB/1004msec); 0 zone resets 00:09:28.084 slat (nsec): min=1609, max=10158k, avg=53605.22, stdev=269130.60 00:09:28.084 clat (usec): min=1240, max=24292, avg=7470.74, stdev=2153.48 00:09:28.084 lat (usec): min=1249, max=24300, avg=7524.35, stdev=2165.27 00:09:28.084 clat percentiles (usec): 00:09:28.084 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 6652], 00:09:28.084 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:09:28.084 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8717], 95.00th=[10159], 00:09:28.085 | 99.00th=[19268], 99.50th=[20841], 99.90th=[23987], 99.95th=[24249], 00:09:28.085 | 99.99th=[24249] 00:09:28.085 bw ( KiB/s): min=32896, max=33536, per=33.90%, avg=33216.00, stdev=452.55, samples=2 00:09:28.085 iops : min= 8224, max= 8384, avg=8304.00, stdev=113.14, samples=2 00:09:28.085 lat (msec) : 2=0.03%, 4=0.61%, 10=91.34%, 20=7.60%, 50=0.42% 00:09:28.085 cpu : usr=3.79%, sys=6.58%, ctx=1084, majf=0, minf=1 00:09:28.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:28.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.085 issued rwts: total=8192,8431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.085 job1: (groupid=0, jobs=1): err= 0: pid=1085616: Tue Oct 8 18:25:21 2024 00:09:28.085 read: IOPS=3315, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:09:28.085 slat (nsec): min=959, max=22789k, avg=178179.03, stdev=1245833.12 00:09:28.085 clat (usec): min=2493, max=72297, avg=22877.11, stdev=15668.37 00:09:28.085 lat (usec): min=2502, max=74052, avg=23055.29, stdev=15779.33 00:09:28.085 clat percentiles (usec): 00:09:28.085 | 1.00th=[ 7177], 5.00th=[ 7308], 10.00th=[ 7373], 20.00th=[11469], 00:09:28.085 | 30.00th=[12780], 40.00th=[13173], 50.00th=[15401], 60.00th=[19530], 00:09:28.085 | 70.00th=[26870], 80.00th=[34866], 90.00th=[50070], 95.00th=[57410], 00:09:28.085 | 99.00th=[64750], 99.50th=[66323], 99.90th=[70779], 99.95th=[71828], 00:09:28.085 | 99.99th=[71828] 00:09:28.085 write: IOPS=3569, BW=13.9MiB/s 
(14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:28.085 slat (nsec): min=1650, max=13508k, avg=104833.72, stdev=639436.41 00:09:28.085 clat (usec): min=1551, max=68636, avg=14169.87, stdev=9343.14 00:09:28.085 lat (usec): min=1560, max=68644, avg=14274.70, stdev=9415.00 00:09:28.085 clat percentiles (usec): 00:09:28.085 | 1.00th=[ 2278], 5.00th=[ 6783], 10.00th=[ 7177], 20.00th=[ 8586], 00:09:28.085 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11731], 60.00th=[13435], 00:09:28.085 | 70.00th=[14484], 80.00th=[16057], 90.00th=[26346], 95.00th=[32375], 00:09:28.085 | 99.00th=[60031], 99.50th=[64226], 99.90th=[68682], 99.95th=[68682], 00:09:28.085 | 99.99th=[68682] 00:09:28.085 bw ( KiB/s): min=12288, max=16384, per=14.63%, avg=14336.00, stdev=2896.31, samples=2 00:09:28.085 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:28.085 lat (msec) : 2=0.36%, 4=0.82%, 10=24.90%, 20=47.26%, 50=21.00% 00:09:28.085 lat (msec) : 100=5.66% 00:09:28.085 cpu : usr=2.19%, sys=4.39%, ctx=312, majf=0, minf=1 00:09:28.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:28.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.085 issued rwts: total=3329,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.085 job2: (groupid=0, jobs=1): err= 0: pid=1085618: Tue Oct 8 18:25:21 2024 00:09:28.085 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:09:28.085 slat (nsec): min=974, max=9229.7k, avg=87418.22, stdev=598277.28 00:09:28.085 clat (usec): min=3736, max=33885, avg=10760.94, stdev=3653.91 00:09:28.085 lat (usec): min=3741, max=33887, avg=10848.36, stdev=3696.24 00:09:28.085 clat percentiles (usec): 00:09:28.085 | 1.00th=[ 5800], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8717], 00:09:28.085 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:09:28.085 | 70.00th=[10945], 80.00th=[12518], 90.00th=[14615], 95.00th=[17957], 00:09:28.085 | 99.00th=[25297], 99.50th=[30016], 99.90th=[31327], 99.95th=[33817], 00:09:28.085 | 99.99th=[33817] 00:09:28.085 write: IOPS=5447, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1007msec); 0 zone resets 00:09:28.085 slat (nsec): min=1560, max=11428k, avg=95338.04, stdev=516860.47 00:09:28.085 clat (usec): min=1225, max=64074, avg=13220.02, stdev=8864.12 00:09:28.085 lat (usec): min=1236, max=64083, avg=13315.36, stdev=8918.23 00:09:28.085 clat percentiles (usec): 00:09:28.085 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 7373], 00:09:28.085 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[10683], 60.00th=[13960], 00:09:28.085 | 70.00th=[15926], 80.00th=[17171], 90.00th=[20579], 95.00th=[24773], 00:09:28.085 | 99.00th=[58459], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 00:09:28.085 | 99.99th=[64226] 00:09:28.085 bw ( KiB/s): min=18312, max=24560, per=21.88%, avg=21436.00, stdev=4418.00, samples=2 00:09:28.085 iops : min= 4578, max= 6140, avg=5359.00, stdev=1104.50, samples=2 00:09:28.085 lat (msec) : 2=0.08%, 4=0.45%, 10=52.78%, 20=39.14%, 50=6.67% 00:09:28.085 lat (msec) : 100=0.89% 00:09:28.085 cpu : usr=4.08%, sys=5.57%, ctx=491, majf=0, minf=1 00:09:28.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:28.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:28.085 issued rwts: total=5120,5486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.085 job3: (groupid=0, jobs=1): err= 0: pid=1085621: Tue Oct 8 18:25:21 2024 00:09:28.085 read: IOPS=6910, BW=27.0MiB/s (28.3MB/s)(27.1MiB/1003msec) 00:09:28.085 slat (nsec): min=934, max=45253k, avg=65344.65, stdev=722261.00 00:09:28.085 clat (usec): min=2168, max=59698, avg=9507.24, stdev=6295.13 00:09:28.085 lat (usec): min=2174, max=59699, avg=9572.58, stdev=6325.84 00:09:28.085 clat percentiles (usec): 00:09:28.085 | 1.00th=[ 3654], 5.00th=[ 4686], 10.00th=[ 6783], 20.00th=[ 7439], 00:09:28.085 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:09:28.085 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[13304], 00:09:28.085 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[53740], 00:09:28.085 | 99.99th=[59507] 00:09:28.085 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:09:28.085 slat (nsec): min=1591, max=13969k, avg=46496.90, stdev=397959.84 00:09:28.085 clat (usec): min=474, max=52274, avg=8528.02, stdev=6135.21 00:09:28.085 lat (usec): min=793, max=52281, avg=8574.51, stdev=6156.97 00:09:28.085 clat percentiles (usec): 00:09:28.085 | 1.00th=[ 1876], 5.00th=[ 3228], 10.00th=[ 4228], 20.00th=[ 5342], 00:09:28.085 | 30.00th=[ 6194], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 7898], 00:09:28.085 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[12256], 95.00th=[21103], 00:09:28.085 | 99.00th=[39584], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:09:28.085 | 99.99th=[52167] 00:09:28.085 bw ( KiB/s): min=27912, max=29432, per=29.26%, avg=28672.00, stdev=1074.80, samples=2 00:09:28.085 iops : min= 6978, max= 7358, avg=7168.00, stdev=268.70, samples=2 00:09:28.085 lat (usec) : 500=0.01%, 1000=0.04% 00:09:28.085 lat (msec) : 2=0.51%, 4=4.89%, 10=77.38%, 20=12.77%, 50=3.57% 00:09:28.085 lat (msec) : 100=0.82% 00:09:28.085 cpu : usr=5.39%, sys=9.08%, ctx=435, majf=0, minf=2 00:09:28.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:28.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.085 issued rwts: total=6931,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.085 00:09:28.085 Run status group 0 (all jobs): 00:09:28.085 READ: bw=91.4MiB/s (95.9MB/s), 13.0MiB/s-31.9MiB/s (13.6MB/s-33.4MB/s), io=92.1MiB (96.6MB), run=1003-1007msec 00:09:28.085 WRITE: bw=95.7MiB/s (100MB/s), 13.9MiB/s-32.8MiB/s (14.6MB/s-34.4MB/s), io=96.4MiB (101MB), run=1003-1007msec 00:09:28.085 00:09:28.085 Disk stats (read/write): 00:09:28.085 nvme0n1: ios=7215/7168, merge=0/0, ticks=30423/28096, in_queue=58519, util=84.07% 00:09:28.085 nvme0n2: ios=2759/3072, merge=0/0, ticks=25791/19806, in_queue=45597, util=88.58% 00:09:28.085 nvme0n3: ios=4153/4607, merge=0/0, ticks=40800/55619, in_queue=96419, util=95.04% 00:09:28.085 nvme0n4: ios=5689/5815, merge=0/0, ticks=44812/39630, in_queue=84442, util=97.22% 00:09:28.085 18:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:28.085 [global] 00:09:28.085 thread=1 00:09:28.085 invalidate=1 00:09:28.085 rw=randwrite 00:09:28.085 time_based=1 00:09:28.085 runtime=1 00:09:28.085 
ioengine=libaio 00:09:28.085 direct=1 00:09:28.085 bs=4096 00:09:28.085 iodepth=128 00:09:28.085 norandommap=0 00:09:28.085 numjobs=1 00:09:28.085 00:09:28.085 verify_dump=1 00:09:28.085 verify_backlog=512 00:09:28.085 verify_state_save=0 00:09:28.085 do_verify=1 00:09:28.085 verify=crc32c-intel 00:09:28.085 [job0] 00:09:28.085 filename=/dev/nvme0n1 00:09:28.085 [job1] 00:09:28.085 filename=/dev/nvme0n2 00:09:28.085 [job2] 00:09:28.085 filename=/dev/nvme0n3 00:09:28.085 [job3] 00:09:28.085 filename=/dev/nvme0n4 00:09:28.085 Could not set queue depth (nvme0n1) 00:09:28.085 Could not set queue depth (nvme0n2) 00:09:28.085 Could not set queue depth (nvme0n3) 00:09:28.085 Could not set queue depth (nvme0n4) 00:09:28.345 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.345 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.345 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.345 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.345 fio-3.35 00:09:28.345 Starting 4 threads 00:09:29.729 00:09:29.730 job0: (groupid=0, jobs=1): err= 0: pid=1086145: Tue Oct 8 18:25:23 2024 00:09:29.730 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:09:29.730 slat (nsec): min=924, max=18728k, avg=89730.19, stdev=691008.00 00:09:29.730 clat (usec): min=3679, max=50254, avg=11577.00, stdev=8100.03 00:09:29.730 lat (usec): min=3698, max=51275, avg=11666.73, stdev=8166.73 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 4293], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6456], 00:09:29.730 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 8586], 60.00th=[ 9503], 00:09:29.730 | 70.00th=[12125], 80.00th=[14746], 90.00th=[24511], 95.00th=[32637], 00:09:29.730 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:09:29.730 | 99.99th=[50070] 00:09:29.730 write: IOPS=5519, BW=21.6MiB/s (22.6MB/s)(21.7MiB/1006msec); 0 zone resets 00:09:29.730 slat (nsec): min=1638, max=16661k, avg=80912.20, stdev=580479.13 00:09:29.730 clat (usec): min=785, max=82479, avg=12085.65, stdev=11057.28 00:09:29.730 lat (usec): min=793, max=82487, avg=12166.56, stdev=11113.37 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 2704], 5.00th=[ 3818], 10.00th=[ 4686], 20.00th=[ 5669], 00:09:29.730 | 30.00th=[ 6325], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[10552], 00:09:29.730 | 70.00th=[11600], 80.00th=[13960], 90.00th=[17171], 95.00th=[35914], 00:09:29.730 | 99.00th=[62129], 99.50th=[66323], 99.90th=[76022], 99.95th=[80217], 00:09:29.730 | 99.99th=[82314] 00:09:29.730 bw ( KiB/s): min=17752, max=25648, per=25.09%, avg=21700.00, stdev=5583.32, samples=2 00:09:29.730 iops : min= 4438, max= 6412, avg=5425.00, stdev=1395.83, samples=2 00:09:29.730 lat (usec) : 1000=0.04% 00:09:29.730 lat (msec) : 2=0.15%, 4=3.10%, 10=54.03%, 20=33.04%, 50=7.91% 00:09:29.730 lat (msec) : 100=1.73% 00:09:29.730 cpu : usr=3.38%, sys=5.57%, ctx=467, majf=0, minf=1 00:09:29.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:29.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.730 issued rwts: total=5120,5553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.730 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:29.730 job1: (groupid=0, jobs=1): err= 0: pid=1086146: Tue Oct 8 18:25:23 2024 00:09:29.730 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:09:29.730 slat (nsec): min=952, max=21858k, avg=75388.96, stdev=669414.90 00:09:29.730 clat (usec): min=1185, max=58147, avg=10679.02, stdev=10206.96 00:09:29.730 lat (usec): min=1210, max=58154, avg=10754.41, stdev=10264.71 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 2212], 5.00th=[ 4113], 10.00th=[ 5014], 20.00th=[ 5669], 00:09:29.730 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 7373], 60.00th=[ 8029], 00:09:29.730 | 70.00th=[10421], 80.00th=[12125], 90.00th=[20055], 95.00th=[32113], 00:09:29.730 | 99.00th=[53216], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:09:29.730 | 99.99th=[57934] 00:09:29.730 write: IOPS=6962, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1003msec); 0 zone resets 00:09:29.730 slat (nsec): min=1577, max=21540k, avg=59137.75, stdev=500739.76 00:09:29.730 clat (usec): min=640, max=37561, avg=7983.41, stdev=6180.36 00:09:29.730 lat (usec): min=648, max=37570, avg=8042.55, stdev=6216.48 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 1614], 5.00th=[ 2737], 10.00th=[ 3490], 20.00th=[ 4490], 00:09:29.730 | 30.00th=[ 5342], 40.00th=[ 5669], 50.00th=[ 6325], 60.00th=[ 6783], 00:09:29.730 | 70.00th=[ 7635], 80.00th=[ 8979], 90.00th=[13960], 95.00th=[22938], 00:09:29.730 | 99.00th=[33162], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:09:29.730 | 99.99th=[37487] 00:09:29.730 bw ( KiB/s): min=17440, max=37400, per=31.70%, avg=27420.00, stdev=14113.85, samples=2 00:09:29.730 iops : min= 4360, max= 9350, avg=6855.00, stdev=3528.46, samples=2 00:09:29.730 lat (usec) : 750=0.02%, 1000=0.10% 00:09:29.730 lat (msec) : 2=1.06%, 4=10.09%, 10=65.62%, 20=15.21%, 50=6.31% 00:09:29.730 lat (msec) : 100=1.59% 00:09:29.730 cpu : usr=3.49%, sys=8.78%, ctx=462, majf=0, minf=1 00:09:29.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:29.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.730 issued rwts: total=6656,6983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.730 job2: (groupid=0, jobs=1): err= 0: pid=1086147: Tue Oct 8 18:25:23 2024 00:09:29.730 read: IOPS=5597, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1005msec) 00:09:29.730 slat (nsec): min=982, max=11052k, avg=89373.18, stdev=619215.63 00:09:29.730 clat (usec): min=3008, max=35345, avg=11045.19, stdev=4008.64 00:09:29.730 lat (usec): min=4332, max=35353, avg=11134.56, stdev=4056.16 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7898], 00:09:29.730 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10814], 00:09:29.730 | 70.00th=[11994], 80.00th=[13698], 90.00th=[15139], 95.00th=[18482], 00:09:29.730 | 99.00th=[25822], 99.50th=[28705], 99.90th=[31851], 99.95th=[35390], 00:09:29.730 | 99.99th=[35390] 00:09:29.730 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:29.730 slat (nsec): min=1600, max=11073k, avg=81504.19, stdev=493650.84 00:09:29.730 clat (usec): min=2321, max=42685, avg=11595.08, stdev=7396.02 00:09:29.730 lat (usec): min=2330, max=42693, avg=11676.59, stdev=7449.14 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 4178], 5.00th=[ 4621], 10.00th=[ 5276], 20.00th=[ 6652], 
00:09:29.730 | 30.00th=[ 6980], 40.00th=[ 7701], 50.00th=[ 9503], 60.00th=[11076], 00:09:29.730 | 70.00th=[13042], 80.00th=[13960], 90.00th=[19268], 95.00th=[31327], 00:09:29.730 | 99.00th=[36439], 99.50th=[38536], 99.90th=[41157], 99.95th=[42730], 00:09:29.730 | 99.99th=[42730] 00:09:29.730 bw ( KiB/s): min=20480, max=24576, per=26.05%, avg=22528.00, stdev=2896.31, samples=2 00:09:29.730 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:29.730 lat (msec) : 4=0.37%, 10=53.55%, 20=39.39%, 50=6.69% 00:09:29.730 cpu : usr=4.18%, sys=6.47%, ctx=430, majf=0, minf=2 00:09:29.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:29.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.730 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.730 job3: (groupid=0, jobs=1): err= 0: pid=1086151: Tue Oct 8 18:25:23 2024 00:09:29.730 read: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1005msec) 00:09:29.730 slat (nsec): min=952, max=14127k, avg=133739.87, stdev=880090.73 00:09:29.730 clat (usec): min=1736, max=60685, avg=14867.70, stdev=10330.72 00:09:29.730 lat (usec): min=1744, max=60691, avg=15001.44, stdev=10453.47 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 2507], 5.00th=[ 4948], 10.00th=[ 6783], 20.00th=[ 8291], 00:09:29.730 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[11207], 60.00th=[13435], 00:09:29.730 | 70.00th=[15664], 80.00th=[18744], 90.00th=[30016], 95.00th=[39060], 00:09:29.730 | 99.00th=[52167], 99.50th=[52167], 99.90th=[60556], 99.95th=[60556], 00:09:29.730 | 99.99th=[60556] 00:09:29.730 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:29.730 slat (nsec): min=1553, max=14724k, avg=140670.47, stdev=774351.57 00:09:29.730 clat (usec): min=486, max=74949, avg=21304.91, stdev=16332.93 00:09:29.730 lat (usec): min=659, max=74957, avg=21445.58, stdev=16433.00 00:09:29.730 clat percentiles (usec): 00:09:29.730 | 1.00th=[ 3589], 5.00th=[ 5669], 10.00th=[ 6915], 20.00th=[ 9896], 00:09:29.730 | 30.00th=[12125], 40.00th=[13304], 50.00th=[13960], 60.00th=[18482], 00:09:29.730 | 70.00th=[21627], 80.00th=[31589], 90.00th=[48497], 95.00th=[61604], 00:09:29.730 | 99.00th=[70779], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:09:29.730 | 99.99th=[74974] 00:09:29.730 bw ( KiB/s): min= 8208, max=20464, per=16.58%, avg=14336.00, stdev=8666.30, samples=2 00:09:29.730 iops : min= 2052, max= 5116, avg=3584.00, stdev=2166.58, samples=2 00:09:29.730 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.20% 00:09:29.730 lat (msec) : 2=0.34%, 4=1.49%, 10=30.64%, 20=40.45%, 50=21.46% 00:09:29.730 lat (msec) : 100=5.37% 00:09:29.730 cpu : usr=1.99%, sys=4.88%, ctx=375, majf=0, minf=2 00:09:29.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:29.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.730 issued rwts: total=3440,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.730 00:09:29.730 Run status group 0 (all jobs): 00:09:29.730 READ: bw=80.9MiB/s (84.9MB/s), 13.4MiB/s-25.9MiB/s (14.0MB/s-27.2MB/s), io=81.4MiB (85.4MB), run=1003-1006msec 00:09:29.730 WRITE: bw=84.5MiB/s (88.6MB/s), 
13.9MiB/s-27.2MiB/s (14.6MB/s-28.5MB/s), io=85.0MiB (89.1MB), run=1003-1006msec 00:09:29.730 00:09:29.730 Disk stats (read/write): 00:09:29.730 nvme0n1: ios=4629/4977, merge=0/0, ticks=37849/44254, in_queue=82103, util=86.37% 00:09:29.730 nvme0n2: ios=4141/4220, merge=0/0, ticks=18967/12550, in_queue=31517, util=87.45% 00:09:29.730 nvme0n3: ios=4152/4455, merge=0/0, ticks=41916/50841, in_queue=92757, util=90.43% 00:09:29.730 nvme0n4: ios=3129/3239, merge=0/0, ticks=26897/35070, in_queue=61967, util=93.61% 00:09:29.730 18:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:29.730 18:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1086480 00:09:29.730 18:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:29.730 18:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:29.730 [global] 00:09:29.730 thread=1 00:09:29.730 invalidate=1 00:09:29.730 rw=read 00:09:29.730 time_based=1 00:09:29.730 runtime=10 00:09:29.730 ioengine=libaio 00:09:29.730 direct=1 00:09:29.730 bs=4096 00:09:29.730 iodepth=1 00:09:29.730 norandommap=1 00:09:29.730 numjobs=1 00:09:29.730 00:09:29.730 [job0] 00:09:29.730 filename=/dev/nvme0n1 00:09:29.730 [job1] 00:09:29.730 filename=/dev/nvme0n2 00:09:29.730 [job2] 00:09:29.730 filename=/dev/nvme0n3 00:09:29.730 [job3] 00:09:29.730 filename=/dev/nvme0n4 00:09:29.991 Could not set queue depth (nvme0n1) 00:09:29.991 Could not set queue depth (nvme0n2) 00:09:29.991 Could not set queue depth (nvme0n3) 00:09:29.991 Could not set queue depth (nvme0n4) 00:09:30.251 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.251 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.251 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.251 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.251 fio-3.35 00:09:30.251 Starting 4 threads 00:09:32.796 18:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:33.055 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11845632, buflen=4096 00:09:33.055 fio: pid=1086679, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.055 18:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:33.055 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.055 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:33.055 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=372736, buflen=4096 00:09:33.055 fio: pid=1086676, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.315 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3682304, buflen=4096 00:09:33.315 fio: pid=1086672, err=95/file:io_u.c:1889, func=io_u error, error=Operation 
not supported 00:09:33.315 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.315 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:33.575 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12267520, buflen=4096 00:09:33.575 fio: pid=1086673, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.575 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.575 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:33.575 00:09:33.575 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1086672: Tue Oct 8 18:25:27 2024 00:09:33.576 read: IOPS=308, BW=1234KiB/s (1264kB/s)(3596KiB/2914msec) 00:09:33.576 slat (usec): min=6, max=11204, avg=50.39, stdev=466.98 00:09:33.576 clat (usec): min=164, max=41937, avg=3160.57, stdev=9083.96 00:09:33.576 lat (usec): min=190, max=41961, avg=3210.99, stdev=9089.72 00:09:33.576 clat percentiles (usec): 00:09:33.576 | 1.00th=[ 537], 5.00th=[ 742], 10.00th=[ 816], 20.00th=[ 906], 00:09:33.576 | 30.00th=[ 971], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1074], 00:09:33.576 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1205], 95.00th=[41157], 00:09:33.576 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:33.576 | 99.99th=[41681] 00:09:33.576 bw ( KiB/s): min= 408, max= 2728, per=13.55%, avg=1208.00, stdev=917.81, samples=5 00:09:33.576 iops : min= 102, max= 682, avg=302.00, stdev=229.45, samples=5 00:09:33.576 lat (usec) : 250=0.11%, 500=0.56%, 750=4.56%, 1000=32.33% 00:09:33.576 lat (msec) : 2=57.00%, 50=5.33% 00:09:33.576 cpu : usr=0.41%, sys=0.82%, ctx=904, majf=0, minf=2 00:09:33.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 issued rwts: total=900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.576 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1086673: Tue Oct 8 18:25:27 2024 00:09:33.576 read: IOPS=970, BW=3882KiB/s (3975kB/s)(11.7MiB/3086msec) 00:09:33.576 slat (usec): min=6, max=24996, avg=44.39, stdev=578.96 00:09:33.576 clat (usec): min=352, max=6023, avg=976.06, stdev=162.86 00:09:33.576 lat (usec): min=377, max=26054, avg=1020.45, stdev=604.11 00:09:33.576 clat percentiles (usec): 00:09:33.576 | 1.00th=[ 627], 5.00th=[ 750], 10.00th=[ 791], 20.00th=[ 865], 00:09:33.576 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1012], 00:09:33.576 | 70.00th=[ 1045], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:09:33.576 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[ 1401], 00:09:33.576 | 99.99th=[ 5997] 00:09:33.576 bw ( KiB/s): min= 3536, max= 4232, per=43.63%, avg=3889.50, stdev=299.71, samples=6 00:09:33.576 iops : min= 884, max= 1058, avg=972.33, stdev=74.98, samples=6 00:09:33.576 lat (usec) : 500=0.10%, 750=4.97%, 
1000=49.90% 00:09:33.576 lat (msec) : 2=44.96%, 10=0.03% 00:09:33.576 cpu : usr=0.97%, sys=2.95%, ctx=3001, majf=0, minf=1 00:09:33.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 issued rwts: total=2996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.576 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1086676: Tue Oct 8 18:25:27 2024 00:09:33.576 read: IOPS=33, BW=132KiB/s (135kB/s)(364KiB/2767msec) 00:09:33.576 slat (nsec): min=8707, max=61393, avg=26147.34, stdev=4285.82 00:09:33.576 clat (usec): min=529, max=42033, avg=30130.11, stdev=18096.50 00:09:33.576 lat (usec): min=555, max=42060, avg=30156.26, stdev=18095.54 00:09:33.576 clat percentiles (usec): 00:09:33.576 | 1.00th=[ 529], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 979], 00:09:33.576 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.576 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:33.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:33.576 | 99.99th=[42206] 00:09:33.576 bw ( KiB/s): min= 96, max= 144, per=1.26%, avg=112.00, stdev=20.40, samples=5 00:09:33.576 iops : min= 24, max= 36, avg=28.00, stdev= 5.10, samples=5 00:09:33.576 lat (usec) : 750=2.17%, 1000=20.65% 00:09:33.576 lat (msec) : 2=4.35%, 50=71.74% 00:09:33.576 cpu : usr=0.00%, sys=0.14%, ctx=92, majf=0, minf=2 00:09:33.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.576 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1086679: Tue Oct 8 18:25:27 2024 00:09:33.576 read: IOPS=1133, BW=4533KiB/s (4642kB/s)(11.3MiB/2552msec) 00:09:33.576 slat (nsec): min=6141, max=62050, avg=25867.08, stdev=5321.62 00:09:33.576 clat (usec): min=282, max=1214, avg=843.06, stdev=176.08 00:09:33.576 lat (usec): min=290, max=1259, avg=868.93, stdev=177.35 00:09:33.576 clat percentiles (usec): 00:09:33.576 | 1.00th=[ 457], 5.00th=[ 537], 10.00th=[ 594], 20.00th=[ 660], 00:09:33.576 | 30.00th=[ 725], 40.00th=[ 799], 50.00th=[ 889], 60.00th=[ 955], 00:09:33.576 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:09:33.576 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:09:33.576 | 99.99th=[ 1221] 00:09:33.576 bw ( KiB/s): min= 3928, max= 5776, per=51.19%, avg=4563.20, stdev=784.91, samples=5 00:09:33.576 iops : min= 982, max= 1444, avg=1140.80, stdev=196.23, samples=5 00:09:33.576 lat (usec) : 500=2.39%, 750=31.52%, 1000=42.97% 00:09:33.576 lat (msec) : 2=23.09% 00:09:33.576 cpu : usr=2.12%, sys=4.23%, ctx=2893, majf=0, minf=2 00:09:33.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.576 issued rwts: total=2893,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:33.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.576 00:09:33.576 Run status group 0 (all jobs): 00:09:33.576 READ: bw=8914KiB/s (9128kB/s), 132KiB/s-4533KiB/s (135kB/s-4642kB/s), io=26.9MiB (28.2MB), run=2552-3086msec 00:09:33.576 00:09:33.576 Disk stats (read/write): 00:09:33.576 nvme0n1: ios=789/0, merge=0/0, ticks=2694/0, in_queue=2694, util=92.49% 00:09:33.576 nvme0n2: ios=2937/0, merge=0/0, ticks=2852/0, in_queue=2852, util=92.56% 00:09:33.576 nvme0n3: ios=70/0, merge=0/0, ticks=2485/0, in_queue=2485, util=95.46% 00:09:33.576 nvme0n4: ios=2885/0, merge=0/0, ticks=2243/0, in_queue=2243, util=96.34% 00:09:33.576 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.576 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:33.836 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.836 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:34.095 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.095 18:25:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:34.355 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.355 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:34.355 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:34.355 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1086480 00:09:34.355 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:34.355 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 
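What the trace above exercises is the hotplug path: a 10-second read job is launched through the fio-wrapper against /dev/nvme0n1..n4, and a few seconds in the harness deletes the RAID and Malloc bdevs backing those namespaces, so each job dies with err=95 (Operation not supported) and fio exits non-zero (fio_status=4). A minimal standalone sketch of the same pattern, assuming an SPDK target is already up and exporting these bdevs through nqn.2016-06.io.spdk:cnode1, with paths as in this workspace:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start a 10s read workload against the connected namespaces and let it settle.
$spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the backing bdevs out from under the running job.
$spdk/scripts/rpc.py bdev_raid_delete concat0
$spdk/scripts/rpc.py bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2; do
    $spdk/scripts/rpc.py bdev_malloc_delete "$bdev"
done

# fio is expected to fail once its files disappear; a zero exit here is the bug.
if wait "$fio_pid"; then
    echo 'nvmf hotplug test: fio unexpectedly succeeded'
    exit 1
fi
echo 'nvmf hotplug test: fio failed as expected'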
00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:34.615 nvmf hotplug test: fio failed as expected 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.615 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.615 rmmod nvme_tcp 00:09:34.875 rmmod nvme_fabrics 00:09:34.875 rmmod nvme_keyring 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1082885 ']' 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1082885 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1082885 ']' 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1082885 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1082885 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.875 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1082885' 00:09:34.875 killing process with pid 1082885 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1082885 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1082885 00:09:34.876 18:25:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.876 18:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.422 18:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.422 00:09:37.422 real 0m29.574s 00:09:37.422 user 2m27.848s 00:09:37.422 sys 0m9.995s 00:09:37.422 18:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.422 18:25:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.422 ************************************ 00:09:37.422 END TEST nvmf_fio_target 00:09:37.422 ************************************ 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.422 ************************************ 00:09:37.422 START TEST nvmf_bdevio 00:09:37.422 ************************************ 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.422 * Looking for test storage... 
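The teardown that closes nvmf_fio_target above follows a fixed order: disconnect the kernel initiator, delete the subsystem over RPC, unload the nvme-tcp/nvme-fabrics modules, kill the nvmf_tgt reactor, then strip the SPDK iptables rules and the target network namespace. Condensed into a sketch (assuming the target pid is already in $nvmfpid; the explicit netns deletion is an assumption about what _remove_spdk_ns amounts to here):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Detach the initiator side first, then remove the subsystem from the target.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Flush I/O and unload the kernel initiator modules, nvme-tcp before nvme-fabrics.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt reactor, then drop the SPDK firewall rules and namespace.
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumption: this is the effect of remove_spdk_ns
ip -4 addr flush cvl_0_1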
00:09:37.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.422 --rc genhtml_branch_coverage=1 00:09:37.422 --rc genhtml_function_coverage=1 00:09:37.422 --rc genhtml_legend=1 00:09:37.422 --rc geninfo_all_blocks=1 00:09:37.422 --rc geninfo_unexecuted_blocks=1 00:09:37.422 00:09:37.422 ' 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.422 --rc genhtml_branch_coverage=1 00:09:37.422 --rc genhtml_function_coverage=1 00:09:37.422 --rc genhtml_legend=1 00:09:37.422 --rc geninfo_all_blocks=1 00:09:37.422 --rc geninfo_unexecuted_blocks=1 00:09:37.422 00:09:37.422 ' 00:09:37.422 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:37.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.422 --rc genhtml_branch_coverage=1 00:09:37.422 --rc genhtml_function_coverage=1 00:09:37.422 --rc genhtml_legend=1 00:09:37.422 --rc geninfo_all_blocks=1 00:09:37.422 --rc geninfo_unexecuted_blocks=1 00:09:37.423 00:09:37.423 ' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.423 --rc genhtml_branch_coverage=1 00:09:37.423 --rc genhtml_function_coverage=1 00:09:37.423 --rc genhtml_legend=1 00:09:37.423 --rc geninfo_all_blocks=1 00:09:37.423 --rc geninfo_unexecuted_blocks=1 00:09:37.423 00:09:37.423 ' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.423 18:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:45.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:45.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.568 18:25:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:45.568 Found net devices under 0000:31:00.0: cvl_0_0 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:45.568 Found net devices under 0000:31:00.1: cvl_0_1 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.568 
18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:09:45.568 00:09:45.568 --- 10.0.0.2 ping statistics --- 00:09:45.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.568 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:09:45.568 00:09:45.568 --- 10.0.0.1 ping statistics --- 00:09:45.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.568 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:45.568 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1092070 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1092070 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1092070 ']' 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.569 18:25:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.569 [2024-10-08 18:25:39.016832] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
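The nvmf_tcp_init sequence traced above builds the two-namespace topology every TCP test in this run relies on: the first E810 port moves into a private network namespace and acts as the target, the second stays in the root namespace as the initiator. A minimal standalone sketch follows (run as root; the interface names, addresses, and namespace name are the ones from this rig and will differ elsewhere):

TARGET_IF=cvl_0_0        # becomes the target side, inside the namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the comment marker lets the teardown
# phase find and delete exactly this rule later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify reachability in both directions before starting nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Once both pings succeed, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is exactly the nvmfappstart call visible above.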
00:09:45.569 [2024-10-08 18:25:39.016902] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.569 [2024-10-08 18:25:39.104787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.569 [2024-10-08 18:25:39.193870] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.569 [2024-10-08 18:25:39.193935] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.569 [2024-10-08 18:25:39.193944] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.569 [2024-10-08 18:25:39.193951] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.569 [2024-10-08 18:25:39.193957] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.569 [2024-10-08 18:25:39.195997] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.569 [2024-10-08 18:25:39.196138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:09:45.569 [2024-10-08 18:25:39.196409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:09:45.569 [2024-10-08 18:25:39.196411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.832 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.832 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:45.832 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:45.832 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.832 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 [2024-10-08 18:25:39.896369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 Malloc0 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.093 18:25:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 [2024-10-08 18:25:39.961153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:46.093 { 00:09:46.093 "params": { 00:09:46.093 "name": "Nvme$subsystem", 00:09:46.093 "trtype": "$TEST_TRANSPORT", 00:09:46.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.093 "adrfam": "ipv4", 00:09:46.093 "trsvcid": "$NVMF_PORT", 00:09:46.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.093 "hdgst": ${hdgst:-false}, 00:09:46.093 "ddgst": ${ddgst:-false} 00:09:46.093 }, 00:09:46.093 "method": "bdev_nvme_attach_controller" 00:09:46.093 } 00:09:46.093 EOF 00:09:46.093 )") 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:09:46.093 18:25:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:46.093 "params": { 00:09:46.093 "name": "Nvme1", 00:09:46.093 "trtype": "tcp", 00:09:46.093 "traddr": "10.0.0.2", 00:09:46.093 "adrfam": "ipv4", 00:09:46.093 "trsvcid": "4420", 00:09:46.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.093 "hdgst": false, 00:09:46.093 "ddgst": false 00:09:46.093 }, 00:09:46.093 "method": "bdev_nvme_attach_controller" 00:09:46.093 }' 00:09:46.093 [2024-10-08 18:25:40.019327] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
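The JSON handed to bdevio on /dev/fd/62 above is assembled by gen_nvmf_target_json from a heredoc template, one bdev_nvme_attach_controller entry per subsystem number, then validated with jq. The xtrace only exposes the inner fragment, so the sketch below reproduces just that visible part; the harness additionally wraps it in a subsystems envelope that is elided here, and gen_target_json is an illustrative name, not the real helper:

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments and pretty-print/validate with jq. Valid as-is for
    # the single-subsystem case shown above; multiple entries would need the
    # enclosing config array the harness provides.
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}

# Process substitution gives the same effect as the /dev/fd/62 plumbing in the trace:
bdevio --json <(gen_target_json)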
00:09:46.093 [2024-10-08 18:25:40.019400] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092134 ] 00:09:46.093 [2024-10-08 18:25:40.108520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.353 [2024-10-08 18:25:40.209272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.353 [2024-10-08 18:25:40.209437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.353 [2024-10-08 18:25:40.209438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.353 I/O targets: 00:09:46.353 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:46.353 00:09:46.353 00:09:46.353 CUnit - A unit testing framework for C - Version 2.1-3 00:09:46.353 http://cunit.sourceforge.net/ 00:09:46.353 00:09:46.353 00:09:46.353 Suite: bdevio tests on: Nvme1n1 00:09:46.612 Test: blockdev write read block ...passed 00:09:46.612 Test: blockdev write zeroes read block ...passed 00:09:46.613 Test: blockdev write zeroes read no split ...passed 00:09:46.613 Test: blockdev write zeroes read split ...passed 00:09:46.613 Test: blockdev write zeroes read split partial ...passed 00:09:46.613 Test: blockdev reset ...[2024-10-08 18:25:40.557893] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:46.613 [2024-10-08 18:25:40.558005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb7000 (9): Bad file descriptor 00:09:46.613 [2024-10-08 18:25:40.661727] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:46.613 passed 00:09:46.872 Test: blockdev write read 8 blocks ...passed 00:09:46.872 Test: blockdev write read size > 128k ...passed 00:09:46.872 Test: blockdev write read invalid size ...passed 00:09:46.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.872 Test: blockdev write read max offset ...passed 00:09:46.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.872 Test: blockdev writev readv 8 blocks ...passed 00:09:46.872 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.132 Test: blockdev writev readv block ...passed 00:09:47.132 Test: blockdev writev readv size > 128k ...passed 00:09:47.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.132 Test: blockdev comparev and writev ...[2024-10-08 18:25:40.968895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.968939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.968955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.968964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.969487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.969499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.969513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.969521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.970046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.970057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.970071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.970079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.970604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.970614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:40.970628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.133 [2024-10-08 18:25:40.970636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:47.133 passed 00:09:47.133 Test: blockdev nvme passthru rw ...passed 00:09:47.133 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:25:41.054843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.133 [2024-10-08 18:25:41.054857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:41.055237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.133 [2024-10-08 18:25:41.055248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:41.055563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.133 [2024-10-08 18:25:41.055573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:47.133 [2024-10-08 18:25:41.055897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.133 [2024-10-08 18:25:41.055908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:47.133 passed 00:09:47.133 Test: blockdev nvme admin passthru ...passed 00:09:47.133 Test: blockdev copy ...passed 00:09:47.133 00:09:47.133 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.133 suites 1 1 n/a 0 0 00:09:47.133 tests 23 23 23 0 0 00:09:47.133 asserts 152 152 152 0 n/a 00:09:47.133 00:09:47.133 Elapsed time = 1.463 seconds 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.393 rmmod nvme_tcp 00:09:47.393 rmmod nvme_fabrics 00:09:47.393 rmmod nvme_keyring 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
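The rmmod lines above come from the tail of nvmfcleanup: after the target exits, the transport modules are unloaded with a bounded retry, since lingering TCP connections can keep nvme_tcp busy for a moment. A sketch of that pattern reconstructed from the traced commands (the pacing between attempts is an assumption; this run succeeded on the first pass, and modprobe -r nvme-tcp is what pulled out nvme_fabrics and nvme_keyring as dependencies):

sync                       # flush dirty pages before yanking the modules
set +e                     # unload attempts are allowed to fail
for i in {1..20}; do
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics && break
    sleep 1                # assumed pacing; not visible in the trace
done
set -e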
00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1092070 ']' 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1092070 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1092070 ']' 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1092070 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1092070 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1092070' 00:09:47.393 killing process with pid 1092070 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1092070 00:09:47.393 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1092070 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.654 18:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.568 18:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.568 00:09:49.568 real 0m12.523s 00:09:49.568 user 0m13.878s 00:09:49.568 sys 0m6.318s 00:09:49.568 18:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.568 18:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.568 ************************************ 00:09:49.568 END TEST nvmf_bdevio 00:09:49.568 ************************************ 00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:49.829 00:09:49.829 real 5m7.931s 00:09:49.829 user 11m46.847s 00:09:49.829 sys 1m54.635s 
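The killprocess call above is the harness's guarded kill: it refuses an empty pid, probes the process with kill -0, and checks the command name so it never signals a sudo wrapper by mistake (here the name resolves to reactor_3, an SPDK reactor thread). A reconstruction from the traced steps; the real helper in autotest_common.sh has additional branches (FreeBSD, killing the child of a sudo wrapper) that this run did not exercise:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1             # nothing to kill
    kill -0 "$pid"                        # errors out if the process is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        return 1                          # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it; works because nvmf_tgt is our child
}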
00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 ************************************ 00:09:49.829 END TEST nvmf_target_core 00:09:49.829 ************************************ 00:09:49.829 18:25:43 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:49.829 18:25:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.829 18:25:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.829 18:25:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 ************************************ 00:09:49.829 START TEST nvmf_target_extra 00:09:49.829 ************************************ 00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:49.829 * Looking for test storage... 00:09:49.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:09:49.829 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.091 --rc genhtml_branch_coverage=1 00:09:50.091 --rc genhtml_function_coverage=1 00:09:50.091 --rc genhtml_legend=1 00:09:50.091 --rc geninfo_all_blocks=1 00:09:50.091 --rc geninfo_unexecuted_blocks=1 00:09:50.091 00:09:50.091 ' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.091 --rc genhtml_branch_coverage=1 00:09:50.091 --rc genhtml_function_coverage=1 00:09:50.091 --rc genhtml_legend=1 00:09:50.091 --rc geninfo_all_blocks=1 00:09:50.091 --rc geninfo_unexecuted_blocks=1 00:09:50.091 00:09:50.091 ' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.091 --rc genhtml_branch_coverage=1 00:09:50.091 --rc genhtml_function_coverage=1 00:09:50.091 --rc genhtml_legend=1 00:09:50.091 --rc geninfo_all_blocks=1 00:09:50.091 --rc geninfo_unexecuted_blocks=1 00:09:50.091 00:09:50.091 ' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.091 --rc genhtml_branch_coverage=1 00:09:50.091 --rc genhtml_function_coverage=1 00:09:50.091 --rc genhtml_legend=1 00:09:50.091 --rc geninfo_all_blocks=1 00:09:50.091 --rc geninfo_unexecuted_blocks=1 00:09:50.091 00:09:50.091 ' 00:09:50.091 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
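The lt/cmp_versions chatter in this stretch is scripts/common.sh deciding whether the installed lcov predates version 2 (it does, so the pre-2.0 option set --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 is selected). The comparison splits both version strings on ., - and :, then compares field by field as decimals. A compressed reconstruction; the harness's decimal() sanitizer for non-numeric fields is omitted, so this sketch assumes purely numeric components:

cmp_versions() {
    local IFS=.-:                 # split fields on dot, dash, colon
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v d1 d2
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}          # missing fields compare as 0
        ((10#$d1 > 10#$d2)) && { [[ $op == *'>'* ]]; return; }
        ((10#$d1 < 10#$d2)) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]            # all fields equal: true for ==, <=, >=
}
lt() { cmp_versions "$1" '<' "$2"; }     # the traced call: lt 1.15 2 -> true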
00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.092 18:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:50.092 ************************************ 00:09:50.092 START TEST nvmf_example 00:09:50.092 ************************************ 00:09:50.092 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.092 * Looking for test storage... 
00:09:50.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.092 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.092 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.092 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.354 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.355 --rc genhtml_branch_coverage=1 00:09:50.355 --rc genhtml_function_coverage=1 00:09:50.355 --rc genhtml_legend=1 00:09:50.355 --rc geninfo_all_blocks=1 00:09:50.355 --rc geninfo_unexecuted_blocks=1 00:09:50.355 00:09:50.355 ' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.355 --rc genhtml_branch_coverage=1 00:09:50.355 --rc genhtml_function_coverage=1 00:09:50.355 --rc genhtml_legend=1 00:09:50.355 --rc geninfo_all_blocks=1 00:09:50.355 --rc geninfo_unexecuted_blocks=1 00:09:50.355 00:09:50.355 ' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:50.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.355 --rc genhtml_branch_coverage=1 00:09:50.355 --rc genhtml_function_coverage=1 00:09:50.355 --rc genhtml_legend=1 00:09:50.355 --rc geninfo_all_blocks=1 00:09:50.355 --rc geninfo_unexecuted_blocks=1 00:09:50.355 00:09:50.355 ' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.355 --rc genhtml_branch_coverage=1 00:09:50.355 --rc genhtml_function_coverage=1 00:09:50.355 --rc genhtml_legend=1 00:09:50.355 --rc geninfo_all_blocks=1 00:09:50.355 --rc geninfo_unexecuted_blocks=1 00:09:50.355 00:09:50.355 ' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:50.355 18:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:50.355 18:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:50.355 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.356 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:58.503 18:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.503 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:58.504 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:58.504 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:58.504 Found net devices under 0000:31:00.0: cvl_0_0 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:58.504 Found net devices under 0000:31:00.1: cvl_0_1 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.504 18:25:51 
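
The discovery pass above matches PCI functions against known device-ID tables (e810, x722, mlx) and then reads each function's netdev name out of sysfs. Roughly the same walk done by hand, simplified from the script's pre-built pci_bus_cache:

    # Simplified sketch; the script caches lspci results instead of globbing sysfs.
    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
        echo "Found ${pci##*/} ($intel - $e810)"
        pci_net_devs=("$pci/net/"*)               # the kernel exposes the netdev name here
        echo "Net devices under ${pci##*/}: ${pci_net_devs[@]##*/}"   # e.g. cvl_0_0
    done
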
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:09:58.504 00:09:58.504 --- 10.0.0.2 ping statistics --- 00:09:58.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.504 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:58.504 00:09:58.504 --- 10.0.0.1 ping statistics --- 00:09:58.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.504 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1096926 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1096926 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1096926 ']' 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.504 18:25:51 
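
Condensed, the nvmf_tcp_init steps above build a two-namespace topology: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, and the firewall rule is tagged with an SPDK_NVMF comment so teardown can filter it back out of iptables-save output. The same steps as plain commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged rule: teardown later runs iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
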
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.504 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.077 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.077 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:59.077 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:59.078 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:09.084 Initializing NVMe Controllers 00:10:09.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:09.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:09.084 Initialization complete. Launching workers. 00:10:09.084 ======================================================== 00:10:09.084 Latency(us) 00:10:09.084 Device Information : IOPS MiB/s Average min max 00:10:09.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18974.86 74.12 3372.55 631.82 15499.14 00:10:09.084 ======================================================== 00:10:09.084 Total : 18974.86 74.12 3372.55 631.82 15499.14 00:10:09.084 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.084 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.395 rmmod nvme_tcp 00:10:09.395 rmmod nvme_fabrics 00:10:09.395 rmmod nvme_keyring 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1096926 ']' 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1096926 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1096926 ']' 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1096926 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1096926 00:10:09.395 18:26:03 
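
The rpc_cmd provisioning traced above (transport, Malloc0 bdev, subsystem cnode1, namespace, listener) maps directly onto SPDK's scripts/rpc.py; a condensed replay under the assumption of the default /var/tmp/spdk.sock socket, with the netns wrappers dropped and the flags mirrored from the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # -u 8192: IO unit size
    rpc.py bdev_malloc_create 64 512                  # 64 MiB, 512 B blocks; returns Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Then drive it from the initiator side (-M 30: 30% reads / 70% writes):
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
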
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1096926' 00:10:09.395 killing process with pid 1096926 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1096926 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1096926 00:10:09.395 nvmf threads initialize successfully 00:10:09.395 bdev subsystem init successfully 00:10:09.395 created a nvmf target service 00:10:09.395 create targets's poll groups done 00:10:09.395 all subsystems of target started 00:10:09.395 nvmf target is running 00:10:09.395 all subsystems of target stopped 00:10:09.395 destroy targets's poll groups done 00:10:09.395 destroyed the nvmf target service 00:10:09.395 bdev subsystem finish successfully 00:10:09.395 nvmf threads destroy successfully 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.395 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.942 00:10:11.942 real 0m21.527s 00:10:11.942 user 0m46.492s 00:10:11.942 sys 0m7.040s 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.942 ************************************ 00:10:11.942 END TEST nvmf_example 00:10:11.942 ************************************ 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.942 ************************************ 00:10:11.942 START TEST nvmf_filesystem 00:10:11.942 ************************************ 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:11.942 * Looking for test storage... 00:10:11.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.942 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.942 --rc genhtml_branch_coverage=1 00:10:11.942 --rc genhtml_function_coverage=1 00:10:11.942 --rc genhtml_legend=1 00:10:11.943 --rc geninfo_all_blocks=1 00:10:11.943 --rc geninfo_unexecuted_blocks=1 00:10:11.943 00:10:11.943 ' 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.943 --rc genhtml_branch_coverage=1 00:10:11.943 --rc genhtml_function_coverage=1 00:10:11.943 --rc genhtml_legend=1 00:10:11.943 --rc geninfo_all_blocks=1 00:10:11.943 --rc geninfo_unexecuted_blocks=1 00:10:11.943 00:10:11.943 ' 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.943 --rc genhtml_branch_coverage=1 00:10:11.943 --rc genhtml_function_coverage=1 00:10:11.943 --rc genhtml_legend=1 00:10:11.943 --rc geninfo_all_blocks=1 00:10:11.943 --rc geninfo_unexecuted_blocks=1 00:10:11.943 00:10:11.943 ' 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.943 --rc genhtml_branch_coverage=1 00:10:11.943 --rc genhtml_function_coverage=1 00:10:11.943 --rc genhtml_legend=1 00:10:11.943 --rc geninfo_all_blocks=1 00:10:11.943 --rc geninfo_unexecuted_blocks=1 00:10:11.943 00:10:11.943 ' 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:11.943 18:26:05 
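
The `lt 1.15 2` gate above decides whether the installed lcov understands the newer --rc options: cmp_versions splits each version string on '.', '-' and ':' and compares numerically, index by index. A self-contained sketch of the idiom (the real script routes components through its decimal sanitizer first; plain numeric components are assumed here):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2; local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]                 # every component equal
    }
    lt 1.15 2 && echo "lcov older than 2"  # true here: 1 < 2 in the first component
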
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:11.943 18:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:11.943 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
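
applications.sh, sourced above, anchors every tool path off its own location rather than the caller's working directory: it resolves itself with readlink -f, walks up to the repo root, and defines the app arrays (NVMF_APP, SPDK_APP, ...) against $_root/build. One plausible restatement of that pattern (the exact walk-up in the real helper may differ):

    # Sketch, not the verbatim helper: resolve this file, then derive tool dirs.
    _here=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/common
    _root=$(readlink -f "$_here/../..")                     # .../spdk (repo root)
    _app_dir=$_root/build/bin
    _examples_dir=$_root/build/examples
    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")
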
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:11.944 #define SPDK_CONFIG_H 00:10:11.944 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:11.944 #define SPDK_CONFIG_APPS 1 00:10:11.944 #define SPDK_CONFIG_ARCH native 00:10:11.944 #undef SPDK_CONFIG_ASAN 00:10:11.944 #undef SPDK_CONFIG_AVAHI 00:10:11.944 #undef SPDK_CONFIG_CET 00:10:11.944 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:11.944 #define SPDK_CONFIG_COVERAGE 1 00:10:11.944 #define SPDK_CONFIG_CROSS_PREFIX 00:10:11.944 #undef SPDK_CONFIG_CRYPTO 00:10:11.944 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:11.944 #undef SPDK_CONFIG_CUSTOMOCF 00:10:11.944 #undef SPDK_CONFIG_DAOS 00:10:11.944 #define SPDK_CONFIG_DAOS_DIR 00:10:11.944 #define SPDK_CONFIG_DEBUG 1 00:10:11.944 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:11.944 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:11.944 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:11.944 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:11.944 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:11.944 #undef SPDK_CONFIG_DPDK_UADK 00:10:11.944 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:11.944 #define SPDK_CONFIG_EXAMPLES 1 00:10:11.944 #undef SPDK_CONFIG_FC 00:10:11.944 #define SPDK_CONFIG_FC_PATH 00:10:11.944 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:11.944 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:11.944 #define SPDK_CONFIG_FSDEV 1 00:10:11.944 #undef SPDK_CONFIG_FUSE 00:10:11.944 #undef SPDK_CONFIG_FUZZER 00:10:11.944 #define SPDK_CONFIG_FUZZER_LIB 00:10:11.944 #undef SPDK_CONFIG_GOLANG 00:10:11.944 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:11.944 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:11.944 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:11.944 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:11.944 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:11.944 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:11.944 #undef SPDK_CONFIG_HAVE_LZ4 00:10:11.944 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:11.944 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:11.944 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:11.944 #define SPDK_CONFIG_IDXD 1 00:10:11.944 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:11.944 #undef SPDK_CONFIG_IPSEC_MB 00:10:11.944 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:11.944 #define SPDK_CONFIG_ISAL 1 00:10:11.944 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:11.944 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:11.944 #define SPDK_CONFIG_LIBDIR 00:10:11.944 #undef SPDK_CONFIG_LTO 00:10:11.944 #define SPDK_CONFIG_MAX_LCORES 128 00:10:11.944 #define SPDK_CONFIG_NVME_CUSE 1 00:10:11.944 #undef SPDK_CONFIG_OCF 00:10:11.944 #define SPDK_CONFIG_OCF_PATH 00:10:11.944 #define SPDK_CONFIG_OPENSSL_PATH 00:10:11.944 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:11.944 #define SPDK_CONFIG_PGO_DIR 00:10:11.944 #undef SPDK_CONFIG_PGO_USE 00:10:11.944 #define SPDK_CONFIG_PREFIX /usr/local 00:10:11.944 #undef SPDK_CONFIG_RAID5F 00:10:11.944 #undef SPDK_CONFIG_RBD 00:10:11.944 #define SPDK_CONFIG_RDMA 1 00:10:11.944 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:11.944 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:11.944 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:11.944 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:11.944 #define SPDK_CONFIG_SHARED 1 00:10:11.944 #undef SPDK_CONFIG_SMA 00:10:11.944 #define SPDK_CONFIG_TESTS 1 00:10:11.944 #undef SPDK_CONFIG_TSAN 00:10:11.944 #define SPDK_CONFIG_UBLK 1 00:10:11.944 #define SPDK_CONFIG_UBSAN 1 00:10:11.944 #undef SPDK_CONFIG_UNIT_TESTS 00:10:11.944 #undef SPDK_CONFIG_URING 00:10:11.944 #define 
SPDK_CONFIG_URING_PATH 00:10:11.944 #undef SPDK_CONFIG_URING_ZNS 00:10:11.944 #undef SPDK_CONFIG_USDT 00:10:11.944 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:11.944 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:11.944 #define SPDK_CONFIG_VFIO_USER 1 00:10:11.944 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:11.944 #define SPDK_CONFIG_VHOST 1 00:10:11.944 #define SPDK_CONFIG_VIRTIO 1 00:10:11.944 #undef SPDK_CONFIG_VTUNE 00:10:11.944 #define SPDK_CONFIG_VTUNE_DIR 00:10:11.944 #define SPDK_CONFIG_WERROR 1 00:10:11.944 #define SPDK_CONFIG_WPDK_DIR 00:10:11.944 #undef SPDK_CONFIG_XNVME 00:10:11.944 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.944 18:26:05 
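
Two configuration layers show up in the dump above: build_config.sh carries the configure-time CONFIG_FOO=y/n switches, and include/spdk/config.h is the compiled-in result (CONFIG_UBSAN=y becomes "#define SPDK_CONFIG_UBSAN 1", =n becomes "#undef ..."). The applications.sh@23 test gates debug-only behavior by substring-matching that header; roughly:

    config_h=$rootdir/include/spdk/config.h       # $rootdir: the SPDK checkout
    if [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"               # enables the SPDK_AUTOTEST_DEBUG_APPS paths
    fi
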
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:11.944 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
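
Stepping back to the pm/common block on this stretch of the trace: it picks the power/load monitors for the run. An associative array records which collector needs sudo, the SUDO array turns that flag into a command prefix, and the extra collectors are added only on bare-metal Linux. Restated with minor simplifications (the real script also checks the hypervisor string for QEMU):

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0
    )
    SUDO[0]=""                                    # index 0: no privilege needed
    SUDO[1]="sudo -E"                             # index 1: run via sudo
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    [[ $(uname -s) == Linux && ! -e /.dockerenv ]] &&
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    for mon in "${MONITOR_RESOURCES[@]}"; do
        # Hypothetical launch line; the real helper backgrounds these with logging.
        echo "${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon -d $PM_OUTPUTDIR"
    done
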
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:11.945 
18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:11.945 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:11.946 18:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
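The LD_LIBRARY_PATH and PYTHONPATH values exported above carry the same spdk/build/lib : dpdk/build/lib : libvfio-user triplet five times over, which is what unconditional prepending produces when autotest_common.sh is sourced once per nested test domain. Duplicate components are harmless to the dynamic loader, but a deduplicating prepend would keep the trace readable; a minimal sketch, using a hypothetical prepend_path_once helper that is not part of the SPDK scripts:

    # prepend_path_once VAR DIR (hypothetical): add DIR to the path variable
    # named VAR only when it is not already one of its components.
    prepend_path_once() {
        local var=$1 dir=$2
        case ":${!var}:" in
            *":${dir}:"*) ;;                                   # already present
            *) printf -v "$var" '%s' "${dir}${!var:+:${!var}}" ;;
        esac
        export "$var"
    }
    prepend_path_once LD_LIBRARY_PATH "$SPDK_LIB_DIR"
    prepend_path_once LD_LIBRARY_PATH "$DPDK_LIB_DIR"
    prepend_path_once LD_LIBRARY_PATH "$VFIO_LIB_DIR"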
00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
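The sanitizer environment traced at autotest_common.sh@197 through @242 condenses to the handful of commands below. This is a reconstruction from the traced values, not a verbatim excerpt of the script; in particular the redirection into the suppression file is assumed:

    # Condensed from the trace: abort hard on ASAN/UBSAN reports and teach
    # LeakSanitizer to ignore a known libfuse3 leak via a suppression file.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"        # assumed redirect
    export LSAN_OPTIONS=suppressions=$asan_suppression_file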
00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:11.946 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1099713 ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1099713 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
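The set_test_storage 2147483648 call above asks for 2 GiB of scratch space, and the trace that follows shows requested_size landing at 2214592512, i.e. the 2 GiB plus 64 MiB of headroom, before df -T output is parsed into the mounts/fss/sizes/avails arrays. A simplified sketch of the selection loop those commands implement, with df --output standing in for the script's own df -T parsing:

    # Simplified sketch: try the test directory first, then a disposable
    # fallback under /tmp produced by mktemp -udt spdk.XXXXXX, and stop at
    # the first candidate whose filesystem has enough free space.
    # ($testdir is supplied by the calling test script.)
    requested_size=$((2147483648 + 64 * 1024 * 1024))          # 2214592512
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"
    for target_dir in "${storage_candidates[@]}"; do
        avail=$(df --output=avail -B1 "$target_dir" | tail -1)
        (( avail >= requested_size )) && break                 # first fit wins
    done

In the run traced below the first candidate already fits: / has 123534172160 bytes available against the 2214592512 requested, so the test storage stays at spdk/test/nvmf/target.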
00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vM5qSY 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vM5qSY/tests/target /tmp/spdk.vM5qSY 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:11.947 18:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123534172160 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356529664 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5822357504 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668233728 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847894016 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23412736 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:11.947 18:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64678051840 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=212992 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:11.947 * Looking for test storage... 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123534172160 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8036950016 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.947 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.948 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.208 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:12.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.209 --rc genhtml_branch_coverage=1 00:10:12.209 --rc genhtml_function_coverage=1 00:10:12.209 --rc genhtml_legend=1 00:10:12.209 --rc geninfo_all_blocks=1 00:10:12.209 --rc geninfo_unexecuted_blocks=1 00:10:12.209 00:10:12.209 ' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:12.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.209 --rc genhtml_branch_coverage=1 00:10:12.209 --rc genhtml_function_coverage=1 00:10:12.209 --rc genhtml_legend=1 00:10:12.209 --rc geninfo_all_blocks=1 00:10:12.209 --rc geninfo_unexecuted_blocks=1 00:10:12.209 00:10:12.209 ' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:12.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.209 --rc genhtml_branch_coverage=1 00:10:12.209 --rc genhtml_function_coverage=1 00:10:12.209 --rc genhtml_legend=1 00:10:12.209 --rc geninfo_all_blocks=1 00:10:12.209 --rc geninfo_unexecuted_blocks=1 00:10:12.209 00:10:12.209 ' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:12.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.209 --rc genhtml_branch_coverage=1 00:10:12.209 --rc genhtml_function_coverage=1 00:10:12.209 --rc genhtml_legend=1 00:10:12.209 --rc geninfo_all_blocks=1 00:10:12.209 --rc geninfo_unexecuted_blocks=1 00:10:12.209 00:10:12.209 ' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.209 18:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.209 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.357 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:20.358 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:20.358 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.358 18:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:20.358 Found net devices under 0000:31:00.0: cvl_0_0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:20.358 Found net devices under 0000:31:00.1: cvl_0_1 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:20.358 18:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:10:20.358 00:10:20.358 --- 10.0.0.2 ping statistics --- 00:10:20.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.358 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:10:20.358 00:10:20.358 --- 10.0.0.1 ping statistics --- 00:10:20.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.358 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.358 ************************************ 00:10:20.358 START TEST nvmf_filesystem_no_in_capsule 00:10:20.358 ************************************ 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:20.358 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1103646 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1103646 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1103646 ']' 00:10:20.359 
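The nvmf_tcp_init sequence traced above is the heart of the phy test setup: one port of the e810 NIC (cvl_0_0) moves into a fresh network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are ping-verified. A minimal standalone sketch of the same setup, with interface names and addresses taken verbatim from the trace (run as root; a sketch, not the harness itself):

  ip netns add cvl_0_0_ns_spdk                                  # namespace that will own the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and the reverse path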
18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.359 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.359 [2024-10-08 18:26:13.907869] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:10:20.359 [2024-10-08 18:26:13.907930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.359 [2024-10-08 18:26:13.999249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.359 [2024-10-08 18:26:14.094476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.359 [2024-10-08 18:26:14.094556] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.359 [2024-10-08 18:26:14.094565] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.359 [2024-10-08 18:26:14.094572] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.359 [2024-10-08 18:26:14.094579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
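nvmfappstart then launches the target inside that namespace; the flags visible in the trace and in the EAL banner decode as below (same invocation as the log shows, comments are my gloss):

  # -i 0       shared-memory instance id (pairs with --file-prefix=spdk0 in the EAL parameters)
  # -e 0xFFFF  tracepoint group mask, per the "Tracepoint Group Mask 0xFFFF specified" notice
  # -m 0xF     core mask 0b1111: four reactors, one per core, as the reactor lines that follow confirm
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF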
00:10:20.359 [2024-10-08 18:26:14.096721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.359 [2024-10-08 18:26:14.096883] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.359 [2024-10-08 18:26:14.097040] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.359 [2024-10-08 18:26:14.097039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 [2024-10-08 18:26:14.782917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.932 18:26:14 
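rpc_cmd drives the freshly started target over /var/tmp/spdk.sock; the calls here and in the lines that follow condense to five JSON-RPC invocations. Shown via scripts/rpc.py on the assumption that rpc_cmd forwards its arguments to that script unchanged:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data on this pass
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB ramdisk, 512 B blocks = 1048576 blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # any host, fixed serial
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                      # expose the ramdisk
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420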
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 [2024-10-08 18:26:14.931753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:20.932 { 00:10:20.932 "name": "Malloc1", 00:10:20.932 "aliases": [ 00:10:20.932 "e379727d-2382-44d6-b774-075925e40de3" 00:10:20.932 ], 00:10:20.932 "product_name": "Malloc disk", 00:10:20.932 "block_size": 512, 00:10:20.932 "num_blocks": 1048576, 00:10:20.932 "uuid": "e379727d-2382-44d6-b774-075925e40de3", 00:10:20.932 "assigned_rate_limits": { 00:10:20.932 "rw_ios_per_sec": 0, 00:10:20.932 "rw_mbytes_per_sec": 0, 00:10:20.932 "r_mbytes_per_sec": 0, 00:10:20.932 "w_mbytes_per_sec": 0 00:10:20.932 }, 00:10:20.932 "claimed": true, 00:10:20.932 "claim_type": "exclusive_write", 00:10:20.932 "zoned": false, 00:10:20.932 "supported_io_types": { 00:10:20.932 "read": 
true, 00:10:20.932 "write": true, 00:10:20.932 "unmap": true, 00:10:20.932 "flush": true, 00:10:20.932 "reset": true, 00:10:20.932 "nvme_admin": false, 00:10:20.932 "nvme_io": false, 00:10:20.932 "nvme_io_md": false, 00:10:20.932 "write_zeroes": true, 00:10:20.932 "zcopy": true, 00:10:20.932 "get_zone_info": false, 00:10:20.932 "zone_management": false, 00:10:20.932 "zone_append": false, 00:10:20.932 "compare": false, 00:10:20.932 "compare_and_write": false, 00:10:20.932 "abort": true, 00:10:20.932 "seek_hole": false, 00:10:20.932 "seek_data": false, 00:10:20.932 "copy": true, 00:10:20.932 "nvme_iov_md": false 00:10:20.932 }, 00:10:20.932 "memory_domains": [ 00:10:20.932 { 00:10:20.932 "dma_device_id": "system", 00:10:20.932 "dma_device_type": 1 00:10:20.932 }, 00:10:20.932 { 00:10:20.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.932 "dma_device_type": 2 00:10:20.932 } 00:10:20.932 ], 00:10:20.932 "driver_specific": {} 00:10:20.932 } 00:10:20.932 ]' 00:10:20.932 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:21.194 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.591 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.591 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:22.591 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.591 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:22.591 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:25.136 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:25.707 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.649 ************************************ 00:10:26.649 START TEST filesystem_ext4 00:10:26.649 ************************************ 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
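On the host side, everything between nvme connect and partprobe above reduces to: connect with nvme-cli, poll lsblk until the SPDKISFASTANDAWESOME serial appears, confirm the block device matches the 536870912-byte malloc bdev, and lay down one GPT partition. Condensed, with values verbatim from the log (the until-loop stands in for the harness's waitforserial counter):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%      # single partition spanning the namespace
  partprobe                                                        # have the kernel re-read the partition table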
00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:26.649 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:26.649 mke2fs 1.47.0 (5-Feb-2023) 00:10:26.649 Discarding device blocks: 0/522240 done 00:10:26.649 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:26.649 Filesystem UUID: 3a02ba70-78de-48b6-95e8-4d4ad7434363 00:10:26.649 Superblock backups stored on blocks: 00:10:26.649 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:26.649 00:10:26.649 Allocating group tables: 0/64 done 00:10:26.649 Writing inode tables: 0/64 done 00:10:29.948 Creating journal (8192 blocks): done 00:10:29.948 Writing superblocks and filesystem accounting information: 0/64 done 00:10:29.948 00:10:29.948 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:29.948 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.539 
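Each filesystem_* subtest then runs the same create/mount/write/unmount cycle against that partition; only mkfs and its force flag differ per filesystem (-F for ext4, -f for btrfs and xfs). The ext4 pass above reduces to:

  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa            # small write through the mounted filesystem
  sync                             # flush it across NVMe/TCP
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"               # signal 0 sends nothing; it only verifies the target survived the I/O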
18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1103646 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.539 00:10:36.539 real 0m8.985s 00:10:36.539 user 0m0.036s 00:10:36.539 sys 0m0.077s 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.539 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:36.539 ************************************ 00:10:36.539 END TEST filesystem_ext4 00:10:36.539 ************************************ 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.540 ************************************ 00:10:36.540 START TEST filesystem_btrfs 00:10:36.540 ************************************ 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:36.540 18:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:36.540 btrfs-progs v6.8.1 00:10:36.540 See https://btrfs.readthedocs.io for more information. 00:10:36.540 00:10:36.540 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:36.540 NOTE: several default settings have changed in version 5.15, please make sure 00:10:36.540 this does not affect your deployments: 00:10:36.540 - DUP for metadata (-m dup) 00:10:36.540 - enabled no-holes (-O no-holes) 00:10:36.540 - enabled free-space-tree (-R free-space-tree) 00:10:36.540 00:10:36.540 Label: (null) 00:10:36.540 UUID: c910a23e-9e02-407b-b6d8-135bfaf2a4e5 00:10:36.540 Node size: 16384 00:10:36.540 Sector size: 4096 (CPU page size: 4096) 00:10:36.540 Filesystem size: 510.00MiB 00:10:36.540 Block group profiles: 00:10:36.540 Data: single 8.00MiB 00:10:36.540 Metadata: DUP 32.00MiB 00:10:36.540 System: DUP 8.00MiB 00:10:36.540 SSD detected: yes 00:10:36.540 Zoned device: no 00:10:36.540 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:36.540 Checksum: crc32c 00:10:36.540 Number of devices: 1 00:10:36.540 Devices: 00:10:36.540 ID SIZE PATH 00:10:36.540 1 510.00MiB /dev/nvme0n1p1 00:10:36.540 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:36.540 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1103646 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.112 
18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.112 00:10:37.112 real 0m1.354s 00:10:37.112 user 0m0.031s 00:10:37.112 sys 0m0.120s 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.112 ************************************ 00:10:37.112 END TEST filesystem_btrfs 00:10:37.112 ************************************ 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.112 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.112 ************************************ 00:10:37.112 START TEST filesystem_xfs 00:10:37.112 ************************************ 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:37.112 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:37.112 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:37.112 = sectsz=512 attr=2, projid32bit=1 00:10:37.112 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:37.112 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:37.112 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:37.112 = sunit=0 swidth=0 blks 00:10:37.112 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:37.112 log =internal log bsize=4096 blocks=16384, version=2 00:10:37.112 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:37.112 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:38.051 Discarding blocks...Done. 00:10:38.051 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:38.051 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1103646 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.962 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.962 00:10:39.962 real 0m2.964s 00:10:39.962 user 0m0.033s 00:10:39.962 sys 0m0.073s 00:10:39.962 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.962 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.962 ************************************ 00:10:39.962 END TEST filesystem_xfs 00:10:39.962 ************************************ 00:10:40.222 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.483 18:26:34 
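Teardown reverses the connect path before the next variant runs; the flock around parted holds a lock on the device node while the test partition is removed. A sketch, with the subsystem deletion that the following lines perform included for completeness:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop partition 1 under a lock on the node
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach; lsblk is then polled until the serial disappears
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1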
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1103646 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1103646 ']' 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1103646 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1103646 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1103646' 00:10:40.483 killing process with pid 1103646 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1103646 00:10:40.483 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1103646 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:40.744 00:10:40.744 real 0m20.894s 00:10:40.744 user 1m22.402s 00:10:40.744 sys 0m1.526s 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.744 ************************************ 00:10:40.744 END TEST nvmf_filesystem_no_in_capsule 00:10:40.744 ************************************ 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.744 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:41.004 ************************************ 00:10:41.004 START TEST nvmf_filesystem_in_capsule 00:10:41.004 ************************************ 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1107986 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1107986 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1107986 ']' 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
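killprocess plus the trap installed at startup make that teardown unconditional, and the timing summary deserves a gloss: user time (1m22s) far exceeds real time (21s) because the 0xF core mask keeps four reactors busy-polling for the entire run. The lifecycle, reduced to its shape (all steps appear in the traces; variable plumbing simplified):

  nvmfpid=$!                                    # pid of the background nvmf_tgt
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  # ... run the filesystem subtests ...
  kill "$nvmfpid"                               # killprocess: terminate the reactor process
  wait "$nvmfpid"                               # reap it so the in_capsule pass can start clean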
00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.004 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.004 [2024-10-08 18:26:34.882009] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:10:41.004 [2024-10-08 18:26:34.882065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.004 [2024-10-08 18:26:34.969366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.004 [2024-10-08 18:26:35.030156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.004 [2024-10-08 18:26:35.030189] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.004 [2024-10-08 18:26:35.030195] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.004 [2024-10-08 18:26:35.030200] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.004 [2024-10-08 18:26:35.030204] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.004 [2024-10-08 18:26:35.031760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.004 [2024-10-08 18:26:35.031915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.004 [2024-10-08 18:26:35.032062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.004 [2024-10-08 18:26:35.032211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.001 [2024-10-08 18:26:35.739263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.001 18:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.001 Malloc1 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.001 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.002 [2024-10-08 18:26:35.861721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:42.002 18:26:35 
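The in_capsule pass now repeats the entire sequence with a single transport-level difference: commands may carry up to 4096 bytes of write data inside the command capsule itself rather than in a separate data transfer. That is the only knob this test matrix varies:

  # first pass (nvmf_filesystem_no_in_capsule):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this pass (nvmf_filesystem_in_capsule):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c sets the in-capsule data size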
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:42.002 { 00:10:42.002 "name": "Malloc1", 00:10:42.002 "aliases": [ 00:10:42.002 "50faafcf-215b-45e4-8bf3-a325f8a75a90" 00:10:42.002 ], 00:10:42.002 "product_name": "Malloc disk", 00:10:42.002 "block_size": 512, 00:10:42.002 "num_blocks": 1048576, 00:10:42.002 "uuid": "50faafcf-215b-45e4-8bf3-a325f8a75a90", 00:10:42.002 "assigned_rate_limits": { 00:10:42.002 "rw_ios_per_sec": 0, 00:10:42.002 "rw_mbytes_per_sec": 0, 00:10:42.002 "r_mbytes_per_sec": 0, 00:10:42.002 "w_mbytes_per_sec": 0 00:10:42.002 }, 00:10:42.002 "claimed": true, 00:10:42.002 "claim_type": "exclusive_write", 00:10:42.002 "zoned": false, 00:10:42.002 "supported_io_types": { 00:10:42.002 "read": true, 00:10:42.002 "write": true, 00:10:42.002 "unmap": true, 00:10:42.002 "flush": true, 00:10:42.002 "reset": true, 00:10:42.002 "nvme_admin": false, 00:10:42.002 "nvme_io": false, 00:10:42.002 "nvme_io_md": false, 00:10:42.002 "write_zeroes": true, 00:10:42.002 "zcopy": true, 00:10:42.002 "get_zone_info": false, 00:10:42.002 "zone_management": false, 00:10:42.002 "zone_append": false, 00:10:42.002 "compare": false, 00:10:42.002 "compare_and_write": false, 00:10:42.002 "abort": true, 00:10:42.002 "seek_hole": false, 00:10:42.002 "seek_data": false, 00:10:42.002 "copy": true, 00:10:42.002 "nvme_iov_md": false 00:10:42.002 }, 00:10:42.002 "memory_domains": [ 00:10:42.002 { 00:10:42.002 "dma_device_id": "system", 00:10:42.002 "dma_device_type": 1 00:10:42.002 }, 00:10:42.002 { 00:10:42.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.002 "dma_device_type": 2 00:10:42.002 } 00:10:42.002 ], 00:10:42.002 "driver_specific": {} 00:10:42.002 } 00:10:42.002 ]' 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:42.002 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.912 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.912 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:43.912 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.912 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:43.912 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:45.822 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:45.823 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:45.823 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:45.823 18:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:46.083 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:47.026 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:47.026 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:47.026 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:47.026 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.026 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.026 ************************************ 00:10:47.026 START TEST filesystem_in_capsule_ext4 00:10:47.026 ************************************ 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:47.026 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:47.026 mke2fs 1.47.0 (5-Feb-2023) 00:10:47.026 Discarding device blocks: 0/522240 done 00:10:47.288 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:47.288 Filesystem UUID: d85c429e-2971-45c5-a5e3-052b77c57500 00:10:47.288 Superblock backups stored on blocks: 00:10:47.288 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:47.288 00:10:47.288 Allocating group tables: 0/64 done 00:10:47.288 Writing inode tables: 
0/64 done 00:10:47.288 Creating journal (8192 blocks): done 00:10:47.288 Writing superblocks and filesystem accounting information: 0/64 done 00:10:47.288 00:10:47.288 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:47.288 18:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:53.872 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1107986 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.873 00:10:53.873 real 0m5.802s 00:10:53.873 user 0m0.020s 00:10:53.873 sys 0m0.085s 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:53.873 ************************************ 00:10:53.873 END TEST filesystem_in_capsule_ext4 00:10:53.873 ************************************ 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.873 
************************************ 00:10:53.873 START TEST filesystem_in_capsule_btrfs 00:10:53.873 ************************************ 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:53.873 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:53.873 btrfs-progs v6.8.1 00:10:53.873 See https://btrfs.readthedocs.io for more information. 00:10:53.873 00:10:53.873 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
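The make_filesystem calls traced above (autotest_common.sh@926-945) wrap mkfs with a filesystem-specific force flag and a retry loop. Below is a minimal sketch reconstructed from the traced locals (fstype, dev_name, i, force); the retry count and sleep interval are assumptions, not the verbatim autotest_common.sh source. The mkfs.btrfs output for this run continues after the sketch.

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mke2fs spells "force" as -F ...
        else
            force=-f        # ... mkfs.btrfs and mkfs.xfs use -f
        fi
        # retry a few times in case the partition node created by
        # partprobe is still settling (count/sleep are assumptions)
        while (( i++ < 3 )); do
            mkfs.$fstype $force "$dev_name" && return 0
            sleep 1
        done
        return 1
    }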
00:10:53.873 NOTE: several default settings have changed in version 5.15, please make sure 00:10:53.873 this does not affect your deployments: 00:10:53.873 - DUP for metadata (-m dup) 00:10:53.873 - enabled no-holes (-O no-holes) 00:10:53.873 - enabled free-space-tree (-R free-space-tree) 00:10:53.873 00:10:53.873 Label: (null) 00:10:53.873 UUID: d36cfc7b-a59b-4a13-8ac2-843e4aacd68a 00:10:53.873 Node size: 16384 00:10:53.873 Sector size: 4096 (CPU page size: 4096) 00:10:53.873 Filesystem size: 510.00MiB 00:10:53.873 Block group profiles: 00:10:53.873 Data: single 8.00MiB 00:10:53.873 Metadata: DUP 32.00MiB 00:10:53.873 System: DUP 8.00MiB 00:10:53.873 SSD detected: yes 00:10:53.873 Zoned device: no 00:10:53.873 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:53.873 Checksum: crc32c 00:10:53.873 Number of devices: 1 00:10:53.873 Devices: 00:10:53.873 ID SIZE PATH 00:10:53.873 1 510.00MiB /dev/nvme0n1p1 00:10:53.873 00:10:53.873 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:53.873 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1107986 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:54.134 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:54.395 00:10:54.395 real 0m1.291s 00:10:54.395 user 0m0.032s 00:10:54.395 sys 0m0.119s 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:54.395 ************************************ 00:10:54.395 END TEST filesystem_in_capsule_btrfs 00:10:54.395 ************************************ 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.395 ************************************ 00:10:54.395 START TEST filesystem_in_capsule_xfs 00:10:54.395 ************************************ 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:54.395 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:54.655 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:54.655 = sectsz=512 attr=2, projid32bit=1 00:10:54.655 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:54.655 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:54.655 data = bsize=4096 blocks=130560, imaxpct=25 00:10:54.655 = sunit=0 swidth=0 blks 00:10:54.655 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:54.655 log =internal log bsize=4096 blocks=16384, version=2 00:10:54.655 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:54.655 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:55.595 Discarding blocks...Done. 
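After each mkfs, the trace exercises the new filesystem through target/filesystem.sh@23-30 and then checks that the target process and block devices survived. A condensed sketch of that sequence, taken from the xtrace above; the nvmfpid variable name is an assumption standing in for the literal PID 1107986 seen in the log:

    mount /dev/nvme0n1p1 /mnt/device          # @23
    touch /mnt/device/aaa                     # @24: write something
    sync                                      # @25
    rm /mnt/device/aaa                        # @26: delete it again
    sync                                      # @27
    umount /mnt/device                        # @30
    kill -0 "$nvmfpid"                        # @37: target still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1     # @40: device still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still present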
00:10:55.595 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:55.595 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1107986 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.508 00:10:57.508 real 0m3.211s 00:10:57.508 user 0m0.028s 00:10:57.508 sys 0m0.079s 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:57.508 ************************************ 00:10:57.508 END TEST filesystem_in_capsule_xfs 00:10:57.508 ************************************ 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:57.508 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.768 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.768 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:10:57.768 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:57.768 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.768 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:57.768 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1107986 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1107986 ']' 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1107986 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1107986 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1107986' 00:10:57.769 killing process with pid 1107986 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1107986 00:10:57.769 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1107986 00:10:58.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:58.031 00:10:58.031 real 0m17.159s 00:10:58.031 user 1m7.707s 00:10:58.031 sys 0m1.393s 00:10:58.031 18:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.031 ************************************ 00:10:58.031 END TEST nvmf_filesystem_in_capsule 00:10:58.031 ************************************ 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.031 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.031 rmmod nvme_tcp 00:10:58.031 rmmod nvme_fabrics 00:10:58.031 rmmod nvme_keyring 00:10:58.291 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.291 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:58.291 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:58.291 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.292 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.205 00:11:00.205 real 0m48.578s 00:11:00.205 user 2m32.503s 00:11:00.205 sys 0m8.980s 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.205 
************************************ 00:11:00.205 END TEST nvmf_filesystem 00:11:00.205 ************************************ 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.205 18:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.467 ************************************ 00:11:00.467 START TEST nvmf_target_discovery 00:11:00.467 ************************************ 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:00.467 * Looking for test storage... 00:11:00.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:00.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.467 --rc genhtml_branch_coverage=1 00:11:00.467 --rc genhtml_function_coverage=1 00:11:00.467 --rc genhtml_legend=1 00:11:00.467 --rc geninfo_all_blocks=1 00:11:00.467 --rc geninfo_unexecuted_blocks=1 00:11:00.467 00:11:00.467 ' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:00.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.467 --rc genhtml_branch_coverage=1 00:11:00.467 --rc genhtml_function_coverage=1 00:11:00.467 --rc genhtml_legend=1 00:11:00.467 --rc geninfo_all_blocks=1 00:11:00.467 --rc geninfo_unexecuted_blocks=1 00:11:00.467 00:11:00.467 ' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:00.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.467 --rc genhtml_branch_coverage=1 00:11:00.467 --rc genhtml_function_coverage=1 00:11:00.467 --rc genhtml_legend=1 00:11:00.467 --rc geninfo_all_blocks=1 00:11:00.467 --rc geninfo_unexecuted_blocks=1 00:11:00.467 00:11:00.467 ' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:00.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.467 --rc genhtml_branch_coverage=1 00:11:00.467 --rc genhtml_function_coverage=1 00:11:00.467 --rc genhtml_legend=1 00:11:00.467 --rc geninfo_all_blocks=1 00:11:00.467 --rc geninfo_unexecuted_blocks=1 00:11:00.467 00:11:00.467 ' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.467 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.468 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.866 18:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.866 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:08.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:08.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:08.867 Found net devices under 0000:31:00.0: cvl_0_0 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
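The "Found net devices under ..." lines above come from a sysfs walk that maps each supported PCI NIC to its kernel interface name. A simplified rendering of the loop traced at nvmf/common.sh@408-427 (the pci_devs and net_devs arrays are populated earlier in the same script):

    for pci in "${pci_devs[@]}"; do
        # each NIC exposes its interface name under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # strip the sysfs path, keeping only the name, e.g. cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done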
00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:08.867 Found net devices under 0000:31:00.1: cvl_0_1 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.867 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.867 18:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:11:08.867 00:11:08.867 --- 10.0.0.2 ping statistics --- 00:11:08.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.867 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:11:08.867 00:11:08.867 --- 10.0.0.1 ping statistics --- 00:11:08.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.867 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1115771 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1115771 00:11:08.867 18:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1115771 ']' 00:11:08.867 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.868 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.868 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.868 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.868 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.868 [2024-10-08 18:27:02.269782] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:11:08.868 [2024-10-08 18:27:02.269850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.868 [2024-10-08 18:27:02.361959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.868 [2024-10-08 18:27:02.459556] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.868 [2024-10-08 18:27:02.459619] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.868 [2024-10-08 18:27:02.459628] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.868 [2024-10-08 18:27:02.459636] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.868 [2024-10-08 18:27:02.459642] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
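[annotation] The trace above launches the target inside the cvl_0_0_ns_spdk namespace and then polls until the RPC socket answers. Reproduced by hand, the same startup amounts to the short sketch below — illustrative only; the binary path and the /var/tmp/spdk.sock default mirror this log, and rpc_get_methods is used here merely as a cheap readiness probe, standing in for the harness's waitforlisten:

  # Start the target in the test namespace: -i 0 = shm ID, -e 0xFFFF = all
  # tracepoint groups, -m 0xF = cores 0-3 (the four reactors started below).
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &
  # Equivalent of waitforlisten: block until the UNIX-domain RPC socket is up.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done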
00:11:08.868 [2024-10-08 18:27:02.461788] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.868 [2024-10-08 18:27:02.461948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.868 [2024-10-08 18:27:02.462106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.868 [2024-10-08 18:27:02.462268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.129 [2024-10-08 18:27:03.145134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.129 Null1 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.129 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 [2024-10-08 18:27:03.205643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 Null2 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:09.389 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:09.390 Null3 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 Null4 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:09.651 00:11:09.651 Discovery Log Number of Records 6, Generation counter 6 00:11:09.651 =====Discovery Log Entry 0====== 00:11:09.651 trtype: tcp 00:11:09.651 adrfam: ipv4 00:11:09.651 subtype: current discovery subsystem 00:11:09.651 treq: not required 00:11:09.651 portid: 0 00:11:09.651 trsvcid: 4420 00:11:09.651 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:09.651 traddr: 10.0.0.2 00:11:09.651 eflags: explicit discovery connections, duplicate discovery information 00:11:09.651 sectype: none 00:11:09.651 =====Discovery Log Entry 1====== 00:11:09.651 trtype: tcp 00:11:09.651 adrfam: ipv4 00:11:09.651 subtype: nvme subsystem 00:11:09.651 treq: not required 00:11:09.651 portid: 0 00:11:09.651 trsvcid: 4420 00:11:09.651 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:09.651 traddr: 10.0.0.2 00:11:09.651 eflags: none 00:11:09.651 sectype: none 00:11:09.651 =====Discovery Log Entry 2====== 00:11:09.651 trtype: tcp 00:11:09.651 adrfam: ipv4 00:11:09.651 subtype: nvme subsystem 00:11:09.651 treq: not required 00:11:09.651 portid: 0 00:11:09.651 trsvcid: 4420 00:11:09.651 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:09.651 traddr: 10.0.0.2 00:11:09.651 eflags: none 00:11:09.651 sectype: none 00:11:09.651 =====Discovery Log Entry 3====== 00:11:09.651 trtype: tcp 00:11:09.651 adrfam: ipv4 00:11:09.651 subtype: nvme subsystem 00:11:09.651 treq: not required 00:11:09.651 portid: 0 00:11:09.651 trsvcid: 4420 00:11:09.651 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:09.651 traddr: 10.0.0.2 00:11:09.651 eflags: none 00:11:09.651 sectype: none 00:11:09.651 =====Discovery Log Entry 4====== 00:11:09.651 trtype: tcp 00:11:09.651 adrfam: ipv4 00:11:09.651 subtype: nvme subsystem 
00:11:09.651 treq: not required 00:11:09.651 portid: 0 00:11:09.651 trsvcid: 4420 00:11:09.651 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:09.651 traddr: 10.0.0.2 00:11:09.651 eflags: none 00:11:09.651 sectype: none 00:11:09.651 =====Discovery Log Entry 5====== 00:11:09.651 trtype: tcp 00:11:09.651 adrfam: ipv4 00:11:09.651 subtype: discovery subsystem referral 00:11:09.651 treq: not required 00:11:09.651 portid: 0 00:11:09.651 trsvcid: 4430 00:11:09.651 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:09.651 traddr: 10.0.0.2 00:11:09.651 eflags: none 00:11:09.651 sectype: none 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:09.651 Perform nvmf subsystem discovery via RPC 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.651 [ 00:11:09.651 { 00:11:09.651 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:09.651 "subtype": "Discovery", 00:11:09.651 "listen_addresses": [ 00:11:09.651 { 00:11:09.651 "trtype": "TCP", 00:11:09.651 "adrfam": "IPv4", 00:11:09.651 "traddr": "10.0.0.2", 00:11:09.651 "trsvcid": "4420" 00:11:09.651 } 00:11:09.651 ], 00:11:09.651 "allow_any_host": true, 00:11:09.651 "hosts": [] 00:11:09.651 }, 00:11:09.651 { 00:11:09.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.651 "subtype": "NVMe", 00:11:09.651 "listen_addresses": [ 00:11:09.651 { 00:11:09.651 "trtype": "TCP", 00:11:09.651 "adrfam": "IPv4", 00:11:09.651 "traddr": "10.0.0.2", 00:11:09.651 "trsvcid": "4420" 00:11:09.651 } 00:11:09.651 ], 00:11:09.651 "allow_any_host": true, 00:11:09.651 "hosts": [], 00:11:09.651 "serial_number": "SPDK00000000000001", 00:11:09.651 "model_number": "SPDK bdev Controller", 00:11:09.651 "max_namespaces": 32, 00:11:09.651 "min_cntlid": 1, 00:11:09.651 "max_cntlid": 65519, 00:11:09.651 "namespaces": [ 00:11:09.651 { 00:11:09.651 "nsid": 1, 00:11:09.651 "bdev_name": "Null1", 00:11:09.651 "name": "Null1", 00:11:09.651 "nguid": "E680E2AB2F924A95B004E1BDDF47FAC4", 00:11:09.651 "uuid": "e680e2ab-2f92-4a95-b004-e1bddf47fac4" 00:11:09.651 } 00:11:09.651 ] 00:11:09.651 }, 00:11:09.651 { 00:11:09.651 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:09.651 "subtype": "NVMe", 00:11:09.651 "listen_addresses": [ 00:11:09.651 { 00:11:09.651 "trtype": "TCP", 00:11:09.651 "adrfam": "IPv4", 00:11:09.651 "traddr": "10.0.0.2", 00:11:09.651 "trsvcid": "4420" 00:11:09.651 } 00:11:09.651 ], 00:11:09.651 "allow_any_host": true, 00:11:09.651 "hosts": [], 00:11:09.651 "serial_number": "SPDK00000000000002", 00:11:09.651 "model_number": "SPDK bdev Controller", 00:11:09.651 "max_namespaces": 32, 00:11:09.651 "min_cntlid": 1, 00:11:09.651 "max_cntlid": 65519, 00:11:09.651 "namespaces": [ 00:11:09.651 { 00:11:09.651 "nsid": 1, 00:11:09.651 "bdev_name": "Null2", 00:11:09.651 "name": "Null2", 00:11:09.651 "nguid": "80D9EA42BC0A45458EF2851C42CE7DE2", 00:11:09.651 "uuid": "80d9ea42-bc0a-4545-8ef2-851c42ce7de2" 00:11:09.651 } 00:11:09.651 ] 00:11:09.651 }, 00:11:09.651 { 00:11:09.651 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:09.651 "subtype": "NVMe", 00:11:09.651 "listen_addresses": [ 00:11:09.651 { 00:11:09.651 "trtype": "TCP", 00:11:09.651 "adrfam": "IPv4", 00:11:09.651 "traddr": "10.0.0.2", 
00:11:09.651 "trsvcid": "4420" 00:11:09.651 } 00:11:09.651 ], 00:11:09.651 "allow_any_host": true, 00:11:09.651 "hosts": [], 00:11:09.651 "serial_number": "SPDK00000000000003", 00:11:09.651 "model_number": "SPDK bdev Controller", 00:11:09.651 "max_namespaces": 32, 00:11:09.651 "min_cntlid": 1, 00:11:09.651 "max_cntlid": 65519, 00:11:09.651 "namespaces": [ 00:11:09.651 { 00:11:09.651 "nsid": 1, 00:11:09.651 "bdev_name": "Null3", 00:11:09.651 "name": "Null3", 00:11:09.651 "nguid": "71F5BE6E66954456BCA87DF9E926C074", 00:11:09.651 "uuid": "71f5be6e-6695-4456-bca8-7df9e926c074" 00:11:09.651 } 00:11:09.651 ] 00:11:09.651 }, 00:11:09.651 { 00:11:09.651 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:09.651 "subtype": "NVMe", 00:11:09.651 "listen_addresses": [ 00:11:09.651 { 00:11:09.651 "trtype": "TCP", 00:11:09.651 "adrfam": "IPv4", 00:11:09.651 "traddr": "10.0.0.2", 00:11:09.651 "trsvcid": "4420" 00:11:09.651 } 00:11:09.651 ], 00:11:09.651 "allow_any_host": true, 00:11:09.651 "hosts": [], 00:11:09.651 "serial_number": "SPDK00000000000004", 00:11:09.651 "model_number": "SPDK bdev Controller", 00:11:09.651 "max_namespaces": 32, 00:11:09.651 "min_cntlid": 1, 00:11:09.651 "max_cntlid": 65519, 00:11:09.651 "namespaces": [ 00:11:09.651 { 00:11:09.651 "nsid": 1, 00:11:09.651 "bdev_name": "Null4", 00:11:09.651 "name": "Null4", 00:11:09.651 "nguid": "0924FEEA4B7941BAA2B9A99E27B7BBC1", 00:11:09.651 "uuid": "0924feea-4b79-41ba-a2b9-a99e27b7bbc1" 00:11:09.651 } 00:11:09.651 ] 00:11:09.651 } 00:11:09.651 ] 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.651 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.651 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:09.652 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.652 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:09.911 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.912 rmmod nvme_tcp 00:11:09.912 rmmod nvme_fabrics 00:11:09.912 rmmod nvme_keyring 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1115771 ']' 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1115771 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1115771 ']' 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1115771 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1115771 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1115771' 00:11:09.912 killing process with pid 1115771 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1115771 00:11:09.912 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1115771 00:11:10.172 18:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.172 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.079 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.079 00:11:12.079 real 0m11.847s 00:11:12.079 user 0m8.653s 00:11:12.079 sys 0m6.261s 00:11:12.079 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.079 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.079 ************************************ 00:11:12.079 END TEST nvmf_target_discovery 00:11:12.079 ************************************ 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.340 ************************************ 00:11:12.340 START TEST nvmf_referrals 00:11:12.340 ************************************ 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:12.340 * Looking for test storage... 
00:11:12.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.340 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.601 --rc genhtml_branch_coverage=1 00:11:12.601 --rc genhtml_function_coverage=1 00:11:12.601 --rc genhtml_legend=1 00:11:12.601 --rc geninfo_all_blocks=1 00:11:12.601 --rc geninfo_unexecuted_blocks=1 00:11:12.601 00:11:12.601 ' 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.601 --rc genhtml_branch_coverage=1 00:11:12.601 --rc genhtml_function_coverage=1 00:11:12.601 --rc genhtml_legend=1 00:11:12.601 --rc geninfo_all_blocks=1 00:11:12.601 --rc geninfo_unexecuted_blocks=1 00:11:12.601 00:11:12.601 ' 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.601 --rc genhtml_branch_coverage=1 00:11:12.601 --rc genhtml_function_coverage=1 00:11:12.601 --rc genhtml_legend=1 00:11:12.601 --rc geninfo_all_blocks=1 00:11:12.601 --rc geninfo_unexecuted_blocks=1 00:11:12.601 00:11:12.601 ' 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.601 --rc genhtml_branch_coverage=1 00:11:12.601 --rc genhtml_function_coverage=1 00:11:12.601 --rc genhtml_legend=1 00:11:12.601 --rc geninfo_all_blocks=1 00:11:12.601 --rc geninfo_unexecuted_blocks=1 00:11:12.601 00:11:12.601 ' 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.601 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
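[annotation] referrals.sh pins the loopback referral endpoints (127.0.0.2 through 127.0.0.4) and the referral port 4430 that the test will register against the discovery subsystem. The RPC surface it exercises reduces to the pattern sketched here — direct rpc.py calls shown for illustration; the script itself issues them through its rpc_cmd wrapper, and nvmf_discovery_get_referrals is assumed from the referrals test flow rather than quoted from this excerpt:

  # Register a referral on the discovery subsystem, list it, then remove it.
  scripts/rpc.py nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals             # inspect current set
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430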
00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.602 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:20.739 18:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:20.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:20.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:20.739 
18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:20.739 Found net devices under 0000:31:00.0: cvl_0_0 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:20.739 Found net devices under 0000:31:00.1: cvl_0_1 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:20.739 18:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:20.739 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.740 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:11:20.740 00:11:20.740 --- 10.0.0.2 ping statistics --- 00:11:20.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.740 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:11:20.740 00:11:20.740 --- 10.0.0.1 ping statistics --- 00:11:20.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.740 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1120968 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1120968 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1120968 ']' 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
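For reference, the nvmf_tcp_init sequence traced above is reproducible by hand. A minimal sketch of the same plumbing, assuming the two ice ports have already been renamed cvl_0_0/cvl_0_1 as in this run (the iptables comment text is shortened here; the harness records the whole rule as its own comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:referrals
  ping -c 1 10.0.0.2                                        # initiator -> target, across the real NIC pair
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator

Tagging the rule with SPDK_NVMF is what lets the later cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore) drop it without disturbing unrelated rules.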
00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.740 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.740 [2024-10-08 18:27:14.208050] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:11:20.740 [2024-10-08 18:27:14.208116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.740 [2024-10-08 18:27:14.296499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.740 [2024-10-08 18:27:14.393650] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.740 [2024-10-08 18:27:14.393704] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.740 [2024-10-08 18:27:14.393713] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.740 [2024-10-08 18:27:14.393720] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.740 [2024-10-08 18:27:14.393726] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.740 [2024-10-08 18:27:14.395688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.740 [2024-10-08 18:27:14.395851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.740 [2024-10-08 18:27:14.396025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.740 [2024-10-08 18:27:14.396066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.001 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.001 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:21.001 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:21.001 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.001 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.263 [2024-10-08 18:27:15.088880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
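rpc_cmd in the trace above is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the target bring-up just logged has a standalone equivalent along these lines (paths assumed relative to an SPDK checkout; flags copied from the trace, with '-t tcp -o' coming from NVMF_TRANSPORT_OPTS):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                          # transport opts plus 8192-byte IO unit
  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery # listener on the discovery subsystem

Port 8009 rather than 4420 because referrals.sh only talks to the discovery service; no I/O subsystem listener is needed for this test.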
00:11:21.263 [2024-10-08 18:27:15.105204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:21.263 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:21.264 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:21.525 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:21.525 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:21.525 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:21.525 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.525 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.525 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:21.526 18:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:21.526 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:21.787 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:22.048 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.308 18:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:22.308 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:22.568 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:22.828 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:22.828 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:22.828 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:22.828 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:22.828 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:22.828 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.088 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:23.089 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
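Taken together, each verification cycle above pairs the RPC view with the wire view: add a referral, read it back over RPC, read it back as a discovery log page from the initiator side, then remove it. A condensed sketch using the hostnqn/hostid generated earlier in this run:

  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The -n flag decides how a referral is reported: registered against nqn.2016-06.io.spdk:cnode1 it appears in the log page as an 'nvme subsystem' record, while -n discovery (or no -n) yields a 'discovery subsystem referral' record, which is exactly the subtype split the jq filters at referrals.sh@31-34 assert on.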
00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.349 rmmod nvme_tcp 00:11:23.349 rmmod nvme_fabrics 00:11:23.349 rmmod nvme_keyring 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1120968 ']' 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1120968 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1120968 ']' 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1120968 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1120968 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1120968' 00:11:23.349 killing process with pid 1120968 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1120968 00:11:23.349 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1120968 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.609 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.609 18:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:25.521 00:11:25.521 real 0m13.337s 00:11:25.521 user 0m15.358s 00:11:25.521 sys 0m6.655s 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.521 ************************************ 00:11:25.521 END TEST nvmf_referrals 00:11:25.521 ************************************ 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.521 18:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.782 ************************************ 00:11:25.782 START TEST nvmf_connect_disconnect 00:11:25.782 ************************************ 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:25.782 * Looking for test storage... 00:11:25.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.782 18:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:25.782 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.783 --rc genhtml_branch_coverage=1 00:11:25.783 --rc genhtml_function_coverage=1 00:11:25.783 --rc genhtml_legend=1 00:11:25.783 --rc geninfo_all_blocks=1 00:11:25.783 --rc geninfo_unexecuted_blocks=1 00:11:25.783 00:11:25.783 ' 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.783 --rc genhtml_branch_coverage=1 00:11:25.783 --rc genhtml_function_coverage=1 00:11:25.783 --rc genhtml_legend=1 00:11:25.783 --rc geninfo_all_blocks=1 00:11:25.783 --rc geninfo_unexecuted_blocks=1 00:11:25.783 00:11:25.783 ' 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.783 --rc genhtml_branch_coverage=1 00:11:25.783 --rc genhtml_function_coverage=1 00:11:25.783 --rc genhtml_legend=1 00:11:25.783 --rc geninfo_all_blocks=1 00:11:25.783 --rc geninfo_unexecuted_blocks=1 00:11:25.783 00:11:25.783 ' 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:25.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.783 --rc genhtml_branch_coverage=1 00:11:25.783 --rc genhtml_function_coverage=1 00:11:25.783 --rc genhtml_legend=1 00:11:25.783 --rc geninfo_all_blocks=1 00:11:25.783 --rc geninfo_unexecuted_blocks=1 00:11:25.783 00:11:25.783 ' 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.783 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.044 18:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.044 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.184 
18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:34.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.184 
18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:34.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:34.184 Found net devices under 0000:31:00.0: cvl_0_0 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.184 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
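The 'Found net devices under ...' lines come from a plain sysfs walk; roughly what nvmf/common.sh@408-426 does per matched PCI function. The operstate read below is an assumption inferred from the [[ up == up ]] trace:

  pci=0000:31:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)            # e.g. .../net/cvl_0_0
  for net_dev in "${!pci_net_devs[@]}"; do
      # keep only interfaces the kernel reports as up (assumed check)
      [[ $(< "${pci_net_devs[net_dev]}/operstate") == up ]] || unset -v 'pci_net_devs[net_dev]'
  done
  pci_net_devs=("${pci_net_devs[@]##*/}")                     # strip the sysfs path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"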
00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:34.185 Found net devices under 0000:31:00.1: cvl_0_1 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:11:34.185 00:11:34.185 --- 10.0.0.2 ping statistics --- 00:11:34.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.185 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:11:34.185 00:11:34.185 --- 10.0.0.1 ping statistics --- 00:11:34.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.185 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1125972 00:11:34.185 18:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1125972 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1125972 ']' 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.185 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.185 [2024-10-08 18:27:27.677068] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:11:34.185 [2024-10-08 18:27:27.677139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.185 [2024-10-08 18:27:27.751623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.185 [2024-10-08 18:27:27.848106] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.185 [2024-10-08 18:27:27.848159] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.185 [2024-10-08 18:27:27.848168] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.185 [2024-10-08 18:27:27.848175] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.185 [2024-10-08 18:27:27.848181] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
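Everything from nvmf_tcp_init to the nvmf_tgt launch above is the recurring two-port topology of these phy runs: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified in both directions before the target starts. Pulled out of the trace and stripped of the xtrace prefixes (run as root; interface names as found above, build path shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF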
00:11:34.185 [2024-10-08 18:27:27.850177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.185 [2024-10-08 18:27:27.850339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.185 [2024-10-08 18:27:27.850497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.185 [2024-10-08 18:27:27.850499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.757 [2024-10-08 18:27:28.557480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.757 18:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.757 [2024-10-08 18:27:28.627218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:34.757 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:38.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.067 rmmod nvme_tcp 00:11:53.067 rmmod nvme_fabrics 00:11:53.067 rmmod nvme_keyring 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1125972 ']' 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1125972 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1125972 ']' 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1125972 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
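With the target up, the rpc_cmd calls above provision it end to end: a TCP transport, a 64 MB/512-byte-block malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. The same sequence via scripts/rpc.py, plus a presumed shape for the five connect/disconnect iterations -- the trace only shows the nvme-cli "disconnected 1 controller(s)" output, so the loop body is an assumption, not a quote from connect_disconnect.sh:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                   # -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do                            # num_iterations=5 in the trace
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # "... disconnected 1 controller(s)"
    done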
00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125972 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125972' 00:11:53.067 killing process with pid 1125972 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1125972 00:11:53.067 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1125972 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:53.067 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:11:53.327 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.327 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.327 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.327 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.327 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.239 00:11:55.239 real 0m29.586s 00:11:55.239 user 1m18.865s 00:11:55.239 sys 0m7.405s 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:55.239 ************************************ 00:11:55.239 END TEST nvmf_connect_disconnect 00:11:55.239 ************************************ 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.239 18:27:49 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.239 ************************************ 00:11:55.239 START TEST nvmf_multitarget 00:11:55.239 ************************************ 00:11:55.239 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:55.501 * Looking for test storage... 00:11:55.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.501 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:55.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.501 --rc genhtml_branch_coverage=1 00:11:55.501 --rc genhtml_function_coverage=1 00:11:55.501 --rc genhtml_legend=1 00:11:55.501 --rc geninfo_all_blocks=1 00:11:55.501 --rc geninfo_unexecuted_blocks=1 00:11:55.501 00:11:55.501 ' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.502 --rc genhtml_branch_coverage=1 00:11:55.502 --rc genhtml_function_coverage=1 00:11:55.502 --rc genhtml_legend=1 00:11:55.502 --rc geninfo_all_blocks=1 00:11:55.502 --rc geninfo_unexecuted_blocks=1 00:11:55.502 00:11:55.502 ' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.502 --rc genhtml_branch_coverage=1 00:11:55.502 --rc genhtml_function_coverage=1 00:11:55.502 --rc genhtml_legend=1 00:11:55.502 --rc geninfo_all_blocks=1 00:11:55.502 --rc geninfo_unexecuted_blocks=1 00:11:55.502 00:11:55.502 ' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.502 --rc genhtml_branch_coverage=1 00:11:55.502 --rc genhtml_function_coverage=1 00:11:55.502 --rc genhtml_legend=1 00:11:55.502 --rc geninfo_all_blocks=1 00:11:55.502 --rc geninfo_unexecuted_blocks=1 00:11:55.502 00:11:55.502 ' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.502 18:27:49 
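The scripts/common.sh steps above are the lcov version gate ("lt 1.15 2"): both version strings are split on ".", "-" and ":" and compared field by field, and since lcov 1.x sorts before 2 the extra branch/function coverage flags are added to LCOV_OPTS. A condensed reimplementation of the comparison as traced (the real helper also routes each field through a decimal-validation step omitted here):

    # lt A B: exit 0 when version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    lt 1.15 2 && echo "old lcov: enable extra coverage flags"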
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:55.502 18:27:49 
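One real wart is preserved in the output above: "[: : integer expression expected" from nvmf/common.sh line 33. The traced test is '[' '' -eq 1 ']' -- an unset build flag expands to the empty string, and test(1) refuses to compare '' as an integer. It is harmless here (the test simply fails and the script moves on), but the defensive spellings are worth noting; $SOME_FLAG below is a stand-in, not the actual variable name:

    [ "$SOME_FLAG" -eq 1 ]        # noisy when SOME_FLAG is unset or empty
    [ "${SOME_FLAG:-0}" -eq 1 ]   # default empty to 0: quiet and equivalent
    [[ ${SOME_FLAG:-0} -eq 1 ]]   # bash [[ ]] arithmetic test, same effect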
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.502 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.503 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:55.503 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:55.503 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.503 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:03.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:03.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:03.643 Found net devices under 0000:31:00.0: cvl_0_0 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.643 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:03.644 Found net devices under 0000:31:00.1: cvl_0_1 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.644 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:12:03.644 00:12:03.644 --- 10.0.0.2 ping statistics --- 00:12:03.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.644 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:12:03.644 00:12:03.644 --- 10.0.0.1 ping statistics --- 00:12:03.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.644 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1134019 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1134019 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1134019 ']' 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.644 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 [2024-10-08 18:27:57.215918] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:12:03.644 [2024-10-08 18:27:57.215997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.644 [2024-10-08 18:27:57.306346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.644 [2024-10-08 18:27:57.402491] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.644 [2024-10-08 18:27:57.402553] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.644 [2024-10-08 18:27:57.402561] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.644 [2024-10-08 18:27:57.402568] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.644 [2024-10-08 18:27:57.402580] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.644 [2024-10-08 18:27:57.404729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.644 [2024-10-08 18:27:57.404891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.644 [2024-10-08 18:27:57.405039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.644 [2024-10-08 18:27:57.405040] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:04.215 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:04.480 "nvmf_tgt_1" 00:12:04.480 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:04.480 "nvmf_tgt_2" 00:12:04.480 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:04.480 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:04.480 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:04.480 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:04.743 true 00:12:04.743 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:04.743 true 00:12:04.743 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.743 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.003 rmmod nvme_tcp 00:12:05.003 rmmod nvme_fabrics 00:12:05.003 rmmod nvme_keyring 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1134019 ']' 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1134019 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1134019 ']' 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1134019 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.003 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1134019 00:12:05.003 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:05.003 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:05.003 18:27:59 
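The multitarget assertions above condense to a short RPC round-trip: multitarget_rpc.py drives the per-target RPCs, jq length counts the entries returned by nvmf_get_targets, and the test checks the count goes 1 -> 3 -> 1 as two extra targets are created and torn down. The same sequence with the helper path shortened (-s 32 presumably caps subsystems per target):

    rpc=test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + nvmf_tgt_1 + nvmf_tgt_2
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default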
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1134019' 00:12:05.003 killing process with pid 1134019 00:12:05.003 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1134019 00:12:05.003 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1134019 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.264 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.178 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.440 00:12:07.440 real 0m11.960s 00:12:07.440 user 0m9.941s 00:12:07.440 sys 0m6.303s 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.440 ************************************ 00:12:07.440 END TEST nvmf_multitarget 00:12:07.440 ************************************ 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.440 ************************************ 00:12:07.440 START TEST nvmf_rpc 00:12:07.440 ************************************ 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:07.440 * Looking for test storage... 
00:12:07.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:12:07.440 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.702 --rc genhtml_branch_coverage=1 00:12:07.702 --rc genhtml_function_coverage=1 00:12:07.702 --rc genhtml_legend=1 00:12:07.702 --rc geninfo_all_blocks=1 00:12:07.702 --rc geninfo_unexecuted_blocks=1 00:12:07.702 00:12:07.702 ' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.702 --rc genhtml_branch_coverage=1 00:12:07.702 --rc genhtml_function_coverage=1 00:12:07.702 --rc genhtml_legend=1 00:12:07.702 --rc geninfo_all_blocks=1 00:12:07.702 --rc geninfo_unexecuted_blocks=1 00:12:07.702 00:12:07.702 ' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.702 --rc genhtml_branch_coverage=1 00:12:07.702 --rc genhtml_function_coverage=1 00:12:07.702 --rc genhtml_legend=1 00:12:07.702 --rc geninfo_all_blocks=1 00:12:07.702 --rc geninfo_unexecuted_blocks=1 00:12:07.702 00:12:07.702 ' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:07.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.702 --rc genhtml_branch_coverage=1 00:12:07.702 --rc genhtml_function_coverage=1 00:12:07.702 --rc genhtml_legend=1 00:12:07.702 --rc geninfo_all_blocks=1 00:12:07.702 --rc geninfo_unexecuted_blocks=1 00:12:07.702 00:12:07.702 ' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
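The xtrace block above is scripts/common.sh's version comparison deciding that the installed lcov (1.15) is older than 2: each dotted version string is split into numeric fields, missing fields default to 0, and the fields are compared numerically left to right. A minimal bash sketch of the same idea, with illustrative names rather than the SPDK helpers:

    # Compare two dotted version strings numerically, field by field.
    # Prints "older", "same", or "newer" for $1 relative to $2.
    compare_versions() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields count as 0, so 1.15 compares like 1.15.0
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && { echo older; return; }
            (( x > y )) && { echo newer; return; }
        done
        echo same
    }

    compare_versions 1.15 2    # prints "older", matching the lt 1.15 2 result traced above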
00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.702 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:07.703 18:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.703 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:15.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.845 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:15.846 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:15.846 Found net devices under 0000:31:00.0: cvl_0_0 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:15.846 Found net devices under 0000:31:00.1: cvl_0_1 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.846 18:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.846 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:12:15.846 00:12:15.846 --- 10.0.0.2 ping statistics --- 00:12:15.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.846 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:12:15.846 00:12:15.846 --- 10.0.0.1 ping statistics --- 00:12:15.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.846 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1138776 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1138776 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1138776 ']' 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.846 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.846 [2024-10-08 18:28:09.343505] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
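Just before the target application boots here, nvmf_tcp_init has finished wiring the test topology: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to play the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420 and a ping in each direction as a sanity check. Condensed into a hedged sketch (interface and namespace names as they appear in the log; run as root; an illustration of the wiring, not the harness's exact code):

    # Target port lives in its own netns; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic through, then prove both directions work.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1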
00:12:15.846 [2024-10-08 18:28:09.343574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.846 [2024-10-08 18:28:09.433426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.846 [2024-10-08 18:28:09.528653] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.846 [2024-10-08 18:28:09.528715] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.846 [2024-10-08 18:28:09.528724] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.846 [2024-10-08 18:28:09.528731] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.846 [2024-10-08 18:28:09.528737] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.846 [2024-10-08 18:28:09.530799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.846 [2024-10-08 18:28:09.530963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.846 [2024-10-08 18:28:09.531122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.846 [2024-10-08 18:28:09.531260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.109 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.109 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:16.109 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:16.109 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.109 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:16.370 "tick_rate": 2400000000, 00:12:16.370 "poll_groups": [ 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_000", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [] 00:12:16.370 }, 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_001", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [] 00:12:16.370 }, 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_002", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 
"current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [] 00:12:16.370 }, 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_003", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [] 00:12:16.370 } 00:12:16.370 ] 00:12:16.370 }' 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.370 [2024-10-08 18:28:10.327373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.370 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:16.370 "tick_rate": 2400000000, 00:12:16.370 "poll_groups": [ 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_000", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [ 00:12:16.370 { 00:12:16.370 "trtype": "TCP" 00:12:16.370 } 00:12:16.370 ] 00:12:16.370 }, 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_001", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [ 00:12:16.370 { 00:12:16.370 "trtype": "TCP" 00:12:16.370 } 00:12:16.370 ] 00:12:16.370 }, 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_002", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.370 "completed_nvme_io": 0, 00:12:16.370 "transports": [ 00:12:16.370 { 00:12:16.370 "trtype": "TCP" 
00:12:16.370 } 00:12:16.370 ] 00:12:16.370 }, 00:12:16.370 { 00:12:16.370 "name": "nvmf_tgt_poll_group_003", 00:12:16.370 "admin_qpairs": 0, 00:12:16.370 "io_qpairs": 0, 00:12:16.370 "current_admin_qpairs": 0, 00:12:16.370 "current_io_qpairs": 0, 00:12:16.370 "pending_bdev_io": 0, 00:12:16.371 "completed_nvme_io": 0, 00:12:16.371 "transports": [ 00:12:16.371 { 00:12:16.371 "trtype": "TCP" 00:12:16.371 } 00:12:16.371 ] 00:12:16.371 } 00:12:16.371 ] 00:12:16.371 }' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.371 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.632 Malloc1 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.632 [2024-10-08 18:28:10.521562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:16.632 [2024-10-08 18:28:10.558628] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:16.632 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.632 could not add new controller: failed to write to nvme-fabrics device 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:16.632 18:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.632 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.548 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.548 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:18.548 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.548 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:18.548 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.462 [2024-10-08 18:28:14.302799] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:20.462 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.462 could not add new controller: failed to write to nvme-fabrics device 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.462 
18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.462 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.843 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.843 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.843 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.843 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:21.843 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:24.384 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.385 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.385 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:24.385 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:24.385 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.385 
18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 [2024-10-08 18:28:18.048081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.385 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.767 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.767 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:25.767 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.767 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:25.767 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.679 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 [2024-10-08 18:28:21.772290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.939 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.322 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.322 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.322 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.322 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:29.322 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.866 [2024-10-08 18:28:25.521565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.866 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.250 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.250 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.250 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.250 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:33.250 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.163 
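Its counterpart waitforserial_disconnect, traced at autotest_common.sh@1219-@1231 after each 'nvme disconnect', waits for the serial to vanish from lsblk again. A hedged sketch of the same idea — the trace only shows the two lsblk/grep probes and the final 'return 0', so the retry bound and sleep interval below are assumptions:

    # Wait until no block device with the given SERIAL remains, i.e. the
    # namespace has been torn down after 'nvme disconnect'.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # assumed retry bound
            sleep 1                    # assumed interval
        done
        # Second probe in list form, mirroring the @1227 trace line
        ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
    }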
18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.163 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 [2024-10-08 18:28:29.292606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.424 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.337 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.337 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.337 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.337 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:37.337 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:39.249 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 [2024-10-08 18:28:33.045778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.633 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.633 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.633 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.633 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:40.633 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:42.560 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.839 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.839 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:42.839 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:42.839 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.839 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:42.840 
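That completes the five iterations of the rpc.sh@81 loop; the @99 loop that follows repeats the same subsystem lifecycle five more times, purely through RPCs and with no host connect in between. In outline, the cycle the trace keeps repeating is (a sketch reconstructed from the xtrace lines; rpc_cmd wraps scripts/rpc.py, and NVME_HOST carries the --hostnqn/--hostid pair set up in nvmf/common.sh):

    loops=5
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5   # nsid 5, matching -n 5 above
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The @99 variant drops the connect/waitforserial/disconnect steps and adds the namespace without -n, so the target auto-assigns nsid 1, which nvmf_subsystem_remove_ns then removes before the subsystem is deleted.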
18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 [2024-10-08 18:28:36.771671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 [2024-10-08 18:28:36.835801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.840 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.162 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.162 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.162 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.162 
18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.162 [2024-10-08 18:28:36.899994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.162 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 [2024-10-08 18:28:36.972213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 [2024-10-08 18:28:37.040414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:43.163 "tick_rate": 2400000000, 00:12:43.163 "poll_groups": [ 00:12:43.163 { 00:12:43.163 "name": "nvmf_tgt_poll_group_000", 00:12:43.163 "admin_qpairs": 0, 00:12:43.163 "io_qpairs": 224, 00:12:43.163 "current_admin_qpairs": 0, 00:12:43.163 "current_io_qpairs": 0, 00:12:43.163 "pending_bdev_io": 0, 00:12:43.163 "completed_nvme_io": 519, 00:12:43.163 "transports": [ 00:12:43.163 { 00:12:43.163 "trtype": "TCP" 00:12:43.163 } 00:12:43.163 ] 00:12:43.163 }, 00:12:43.163 { 00:12:43.163 "name": "nvmf_tgt_poll_group_001", 00:12:43.163 "admin_qpairs": 1, 00:12:43.163 "io_qpairs": 223, 00:12:43.163 "current_admin_qpairs": 0, 00:12:43.163 "current_io_qpairs": 0, 00:12:43.163 "pending_bdev_io": 0, 00:12:43.163 "completed_nvme_io": 223, 00:12:43.163 "transports": [ 00:12:43.163 { 00:12:43.163 "trtype": "TCP" 00:12:43.163 } 00:12:43.163 ] 00:12:43.163 }, 00:12:43.163 { 00:12:43.163 "name": "nvmf_tgt_poll_group_002", 00:12:43.163 "admin_qpairs": 6, 00:12:43.163 "io_qpairs": 218, 00:12:43.163 "current_admin_qpairs": 0, 00:12:43.163 "current_io_qpairs": 0, 00:12:43.163 "pending_bdev_io": 0, 00:12:43.163 "completed_nvme_io": 271, 00:12:43.163 "transports": [ 00:12:43.163 { 00:12:43.163 "trtype": "TCP" 00:12:43.163 } 00:12:43.163 ] 00:12:43.163 }, 00:12:43.163 { 00:12:43.163 "name": "nvmf_tgt_poll_group_003", 00:12:43.163 "admin_qpairs": 0, 00:12:43.163 "io_qpairs": 224, 00:12:43.163 "current_admin_qpairs": 0, 00:12:43.163 "current_io_qpairs": 0, 00:12:43.163 "pending_bdev_io": 0, 00:12:43.163 "completed_nvme_io": 226, 00:12:43.163 "transports": [ 00:12:43.163 { 00:12:43.163 "trtype": "TCP" 00:12:43.163 } 00:12:43.163 ] 00:12:43.163 } 00:12:43.163 ] 00:12:43.163 }' 00:12:43.163 18:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.163 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.441 rmmod nvme_tcp 00:12:43.441 rmmod nvme_fabrics 00:12:43.441 rmmod nvme_keyring 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1138776 ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1138776 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1138776 ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1138776 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1138776 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1138776' 00:12:43.441 killing process with pid 1138776 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1138776 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1138776 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.441 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.986 00:12:45.986 real 0m38.226s 00:12:45.986 user 1m53.714s 00:12:45.986 sys 0m8.088s 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.986 ************************************ 00:12:45.986 END TEST nvmf_rpc 00:12:45.986 ************************************ 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.986 ************************************ 00:12:45.986 START TEST nvmf_invalid 00:12:45.986 ************************************ 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.986 * Looking for test storage... 
00:12:45.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:45.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.986 --rc genhtml_branch_coverage=1 00:12:45.986 --rc genhtml_function_coverage=1 00:12:45.986 --rc genhtml_legend=1 00:12:45.986 --rc geninfo_all_blocks=1 00:12:45.986 --rc geninfo_unexecuted_blocks=1 00:12:45.986 00:12:45.986 ' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:45.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.986 --rc genhtml_branch_coverage=1 00:12:45.986 --rc genhtml_function_coverage=1 00:12:45.986 --rc genhtml_legend=1 00:12:45.986 --rc geninfo_all_blocks=1 00:12:45.986 --rc geninfo_unexecuted_blocks=1 00:12:45.986 00:12:45.986 ' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:45.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.986 --rc genhtml_branch_coverage=1 00:12:45.986 --rc genhtml_function_coverage=1 00:12:45.986 --rc genhtml_legend=1 00:12:45.986 --rc geninfo_all_blocks=1 00:12:45.986 --rc geninfo_unexecuted_blocks=1 00:12:45.986 00:12:45.986 ' 00:12:45.986 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:45.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.987 --rc genhtml_branch_coverage=1 00:12:45.987 --rc genhtml_function_coverage=1 00:12:45.987 --rc genhtml_legend=1 00:12:45.987 --rc geninfo_all_blocks=1 00:12:45.987 --rc geninfo_unexecuted_blocks=1 00:12:45.987 00:12:45.987 ' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:45.987 18:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
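The "[: : integer expression expected" message in the middle of the trace above is shell noise rather than a test failure: build_nvmf_app_args feeds an empty variable into a numeric '[ ... -eq 1 ]' test, and test(1) cannot compare an empty string as an integer. A defaulted expansion silences it; SOME_FLAG below is a stand-in name, not the actual variable at common.sh line 33:

SOME_FLAG=''                                   # empty, as in the trace
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null || true     # would print: [: : integer expression expected
[ "${SOME_FLAG:-0}" -eq 1 ] || echo "flag off, no diagnostic"  # guarded form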
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.987 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:54.125 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:54.125 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:54.125 Found net devices under 0000:31:00.0: cvl_0_0 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:54.125 Found net devices under 0000:31:00.1: cvl_0_1 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:12:54.125 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
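gather_supported_nvmf_pci_devs above boils down to two steps: match PCI functions against known vendor:device IDs (0x8086:0x159b is one of the Intel E810 IDs seen here), then list the netdevs each matched function exposes under sysfs. A standalone sketch of the same scan, assuming lspci is available; the device IDs are taken from the e810 array in the trace:

for id in 8086:1592 8086:159b; do                    # E810 variants from the e810 array
    lspci -Dn -d "$id" | while read -r pci _; do     # -D prints the full domain:bus:dev.fn address
        echo "Found $pci ($id)"
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue             # glob stays literal when the port has no netdev
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done
done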
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:12:54.126 00:12:54.126 --- 10.0.0.2 ping statistics --- 00:12:54.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.126 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:12:54.126 00:12:54.126 --- 10.0.0.1 ping statistics --- 00:12:54.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.126 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1148701 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1148701 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1148701 ']' 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.126 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.126 [2024-10-08 18:28:47.624869] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
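The block above builds the test topology and launches the target. Condensed, the moving parts are: put one physical port (cvl_0_0) into its own network namespace so traffic between the two ports of the NIC actually crosses the wire instead of being short-circuited over loopback, address both sides, admit TCP/4420, then start nvmf_tgt inside the namespace and wait until its RPC socket answers. The setup commands below are lifted from the trace; the polling loop is a rough stand-in for what waitforlisten does, using the real rpc_get_methods RPC:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, namespaced
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP rule, verbatim from the trace
ping -c 1 10.0.0.2                                                 # sanity: initiator -> target

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"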
00:12:54.126 [2024-10-08 18:28:47.624958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.126 [2024-10-08 18:28:47.716363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.126 [2024-10-08 18:28:47.810073] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.126 [2024-10-08 18:28:47.810132] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.126 [2024-10-08 18:28:47.810141] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.126 [2024-10-08 18:28:47.810150] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.126 [2024-10-08 18:28:47.810157] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.126 [2024-10-08 18:28:47.812261] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.126 [2024-10-08 18:28:47.812422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.126 [2024-10-08 18:28:47.812585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.126 [2024-10-08 18:28:47.812586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.390 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.390 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31517 00:12:54.652 [2024-10-08 18:28:48.662241] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:54.652 { 00:12:54.652 "nqn": "nqn.2016-06.io.spdk:cnode31517", 00:12:54.652 "tgt_name": "foobar", 00:12:54.652 "method": "nvmf_create_subsystem", 00:12:54.652 "req_id": 1 00:12:54.652 } 00:12:54.652 Got JSON-RPC error response 00:12:54.652 response: 00:12:54.652 { 00:12:54.652 "code": -32603, 00:12:54.652 "message": "Unable to find target foobar" 00:12:54.652 }' 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:54.652 { 00:12:54.652 "nqn": "nqn.2016-06.io.spdk:cnode31517", 00:12:54.652 "tgt_name": "foobar", 00:12:54.652 "method": "nvmf_create_subsystem", 00:12:54.652 "req_id": 1 00:12:54.652 } 00:12:54.652 Got JSON-RPC error response 00:12:54.652 
response: 00:12:54.652 { 00:12:54.652 "code": -32603, 00:12:54.652 "message": "Unable to find target foobar" 00:12:54.652 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:54.652 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17572 00:12:54.913 [2024-10-08 18:28:48.871126] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17572: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:54.913 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:54.913 { 00:12:54.913 "nqn": "nqn.2016-06.io.spdk:cnode17572", 00:12:54.913 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.913 "method": "nvmf_create_subsystem", 00:12:54.913 "req_id": 1 00:12:54.913 } 00:12:54.913 Got JSON-RPC error response 00:12:54.913 response: 00:12:54.913 { 00:12:54.913 "code": -32602, 00:12:54.913 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.913 }' 00:12:54.913 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:54.913 { 00:12:54.913 "nqn": "nqn.2016-06.io.spdk:cnode17572", 00:12:54.913 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.913 "method": "nvmf_create_subsystem", 00:12:54.913 "req_id": 1 00:12:54.913 } 00:12:54.913 Got JSON-RPC error response 00:12:54.913 response: 00:12:54.913 { 00:12:54.913 "code": -32602, 00:12:54.913 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.913 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.913 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:54.913 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11293 00:12:55.174 [2024-10-08 18:28:49.079849] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11293: invalid model number 'SPDK_Controller' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:55.174 { 00:12:55.174 "nqn": "nqn.2016-06.io.spdk:cnode11293", 00:12:55.174 "model_number": "SPDK_Controller\u001f", 00:12:55.174 "method": "nvmf_create_subsystem", 00:12:55.174 "req_id": 1 00:12:55.174 } 00:12:55.174 Got JSON-RPC error response 00:12:55.174 response: 00:12:55.174 { 00:12:55.174 "code": -32602, 00:12:55.174 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.174 }' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:55.174 { 00:12:55.174 "nqn": "nqn.2016-06.io.spdk:cnode11293", 00:12:55.174 "model_number": "SPDK_Controller\u001f", 00:12:55.174 "method": "nvmf_create_subsystem", 00:12:55.174 "req_id": 1 00:12:55.174 } 00:12:55.174 Got JSON-RPC error response 00:12:55.174 response: 00:12:55.174 { 00:12:55.174 "code": -32602, 00:12:55.174 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.174 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:55.174 18:28:49 
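The three probes above are the hand-crafted negative cases, run before the randomized ones that follow: a subsystem created against a nonexistent target name (foobar, JSON-RPC error -32603), then a serial number and a model number each carrying a control character (\x1f), both rejected with -32602. The harness asserts on the error text; a standalone sketch of the same checks, with the rpc.py path abbreviated from the trace:

rpc=./scripts/rpc.py
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31517 2>&1) || true
[[ $out == *'Unable to find target'* ]] && echo "bad target name rejected as expected"

out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17572 2>&1) || true
[[ $out == *'Invalid SN'* ]] && echo "control character in serial rejected as expected"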
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:55.174 
18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.174 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 
00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'X83v,p& wc(c,&7F}lux=' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'X83v,p& wc(c,&7F}lux=' nqn.2016-06.io.spdk:cnode19230 00:12:55.435 [2024-10-08 18:28:49.445271] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19230: invalid serial number 'X83v,p& wc(c,&7F}lux=' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:55.435 { 00:12:55.435 "nqn": "nqn.2016-06.io.spdk:cnode19230", 00:12:55.435 "serial_number": "X83v,p& wc(c,&7F}lux=", 00:12:55.435 "method": "nvmf_create_subsystem", 00:12:55.435 "req_id": 1 00:12:55.435 } 00:12:55.435 Got JSON-RPC error response 00:12:55.435 response: 00:12:55.435 { 00:12:55.435 "code": -32602, 00:12:55.435 "message": "Invalid SN X83v,p& wc(c,&7F}lux=" 00:12:55.435 }' 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:55.435 { 00:12:55.435 "nqn": "nqn.2016-06.io.spdk:cnode19230", 00:12:55.435 "serial_number": "X83v,p& wc(c,&7F}lux=", 00:12:55.435 "method": "nvmf_create_subsystem", 00:12:55.435 "req_id": 1 00:12:55.435 } 00:12:55.435 Got JSON-RPC error response 00:12:55.435 response: 00:12:55.435 { 00:12:55.435 "code": -32602, 00:12:55.435 "message": "Invalid SN X83v,p& wc(c,&7F}lux=" 00:12:55.435 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.435 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.696 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:55.697 
18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x70' 00:12:55.697 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p [trace condensed: target/invalid.sh@24-25 repeats (( ll++ )) / (( ll < length )) and, for each remaining character, printf %x, echo -e '\xNN' and string+=<char>, until the 41-character random model number echoed below is assembled] 00:12:55.958 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:12:55.958 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=3EouN3fA8D!3B|~~ip\!h!@IQt{c~Pu)" d"I@9`' 00:12:55.958 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '=3EouN3fA8D!3B|~~ip\!h!@IQt{c~Pu)" d"I@9`' nqn.2016-06.io.spdk:cnode27425 00:12:55.958 [2024-10-08 18:28:49.971129] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27425: invalid model number '=3EouN3fA8D!3B|~~ip\!h!@IQt{c~Pu)" d"I@9`' 00:12:55.958 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:55.958 { 00:12:55.958 "nqn": "nqn.2016-06.io.spdk:cnode27425", 00:12:55.958 "model_number": "=3EouN3fA8D!3B|~~ip\\!h!@IQt{c~Pu)\" d\"I@9`", 00:12:55.958 "method": "nvmf_create_subsystem", 00:12:55.958 "req_id": 1
00:12:55.958 } 00:12:55.958 Got JSON-RPC error response 00:12:55.958 response: 00:12:55.958 { 00:12:55.958 "code": -32602, 00:12:55.958 "message": "Invalid MN =3EouN3fA8D!3B|~~ip\\!h!@IQt{c~Pu)\" d\"I@9`" 00:12:55.958 }' 00:12:55.958 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:55.958 { 00:12:55.958 "nqn": "nqn.2016-06.io.spdk:cnode27425", 00:12:55.958 "model_number": "=3EouN3fA8D!3B|~~ip\\!h!@IQt{c~Pu)\" d\"I@9`", 00:12:55.958 "method": "nvmf_create_subsystem", 00:12:55.958 "req_id": 1 00:12:55.958 } 00:12:55.958 Got JSON-RPC error response 00:12:55.958 response: 00:12:55.958 { 00:12:55.958 "code": -32602, 00:12:55.958 "message": "Invalid MN =3EouN3fA8D!3B|~~ip\\!h!@IQt{c~Pu)\" d\"I@9`" 00:12:55.958 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.958 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:56.219 [2024-10-08 18:28:50.159805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.219 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:56.478 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:56.479 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:56.479 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:56.479 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:56.479 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:56.479 [2024-10-08 18:28:50.526473] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:56.738 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:56.738 { 00:12:56.738 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.738 "listen_address": { 00:12:56.738 "trtype": "tcp", 00:12:56.738 "traddr": "", 00:12:56.738 "trsvcid": "4421" 00:12:56.738 }, 00:12:56.738 "method": "nvmf_subsystem_remove_listener", 00:12:56.738 "req_id": 1 00:12:56.738 } 00:12:56.738 Got JSON-RPC error response 00:12:56.738 response: 00:12:56.738 { 00:12:56.738 "code": -32602, 00:12:56.738 "message": "Invalid parameters" 00:12:56.738 }' 00:12:56.738 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:56.738 { 00:12:56.738 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.738 "listen_address": { 00:12:56.738 "trtype": "tcp", 00:12:56.738 "traddr": "", 00:12:56.738 "trsvcid": "4421" 00:12:56.738 }, 00:12:56.738 "method": "nvmf_subsystem_remove_listener", 00:12:56.738 "req_id": 1 00:12:56.738 } 00:12:56.738 Got JSON-RPC error response 00:12:56.738 response: 00:12:56.738 { 00:12:56.738 "code": -32602, 00:12:56.738 "message": "Invalid parameters" 00:12:56.738 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:56.738 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16418 -i 0 00:12:56.738 [2024-10-08 18:28:50.715076] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16418: invalid cntlid range [0-65519] 00:12:56.738 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:56.738 { 00:12:56.738 "nqn": "nqn.2016-06.io.spdk:cnode16418", 00:12:56.738 "min_cntlid": 0, 00:12:56.738 "method": "nvmf_create_subsystem", 00:12:56.738 "req_id": 1 00:12:56.738 } 00:12:56.738 Got JSON-RPC error response 00:12:56.738 response: 00:12:56.738 { 00:12:56.738 "code": -32602, 00:12:56.738 "message": "Invalid cntlid range [0-65519]" 00:12:56.738 }' 00:12:56.738 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:56.738 { 00:12:56.738 "nqn": "nqn.2016-06.io.spdk:cnode16418", 00:12:56.738 "min_cntlid": 0, 00:12:56.738 "method": "nvmf_create_subsystem", 00:12:56.738 "req_id": 1 00:12:56.738 } 00:12:56.738 Got JSON-RPC error response 00:12:56.738 response: 00:12:56.738 { 00:12:56.738 "code": -32602, 00:12:56.738 "message": "Invalid cntlid range [0-65519]" 00:12:56.738 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.738 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4112 -i 65520 00:12:56.999 [2024-10-08 18:28:50.903721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4112: invalid cntlid range [65520-65519] 00:12:56.999 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:56.999 { 00:12:56.999 "nqn": "nqn.2016-06.io.spdk:cnode4112", 00:12:56.999 "min_cntlid": 65520, 00:12:56.999 "method": "nvmf_create_subsystem", 00:12:56.999 "req_id": 1 00:12:56.999 } 00:12:56.999 Got JSON-RPC error response 00:12:56.999 response: 00:12:56.999 { 00:12:56.999 "code": -32602, 00:12:56.999 "message": "Invalid cntlid range [65520-65519]" 00:12:56.999 }' 00:12:56.999 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:56.999 { 00:12:56.999 "nqn": "nqn.2016-06.io.spdk:cnode4112", 00:12:56.999 "min_cntlid": 65520, 00:12:56.999 "method": "nvmf_create_subsystem", 00:12:56.999 "req_id": 1 00:12:56.999 } 00:12:56.999 Got JSON-RPC error response 00:12:56.999 response: 00:12:56.999 { 00:12:56.999 "code": -32602, 00:12:56.999 "message": "Invalid cntlid range [65520-65519]" 00:12:56.999 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.999 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2194 -I 0 00:12:57.259 [2024-10-08 18:28:51.092271] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2194: invalid cntlid range [1-0] 00:12:57.259 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:57.259 { 00:12:57.259 "nqn": "nqn.2016-06.io.spdk:cnode2194", 00:12:57.259 "max_cntlid": 0, 00:12:57.259 "method": "nvmf_create_subsystem", 00:12:57.259 "req_id": 1 00:12:57.259 } 00:12:57.259 Got JSON-RPC error response 00:12:57.259 response: 00:12:57.259 { 00:12:57.259 "code": -32602, 00:12:57.259 "message": "Invalid cntlid range [1-0]" 00:12:57.259 }' 00:12:57.259 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:57.259 { 00:12:57.259 "nqn": "nqn.2016-06.io.spdk:cnode2194", 00:12:57.259 "max_cntlid": 0, 00:12:57.259 
"method": "nvmf_create_subsystem", 00:12:57.259 "req_id": 1 00:12:57.259 } 00:12:57.259 Got JSON-RPC error response 00:12:57.259 response: 00:12:57.259 { 00:12:57.259 "code": -32602, 00:12:57.259 "message": "Invalid cntlid range [1-0]" 00:12:57.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.259 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3127 -I 65520 00:12:57.259 [2024-10-08 18:28:51.280897] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3127: invalid cntlid range [1-65520] 00:12:57.259 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:57.259 { 00:12:57.259 "nqn": "nqn.2016-06.io.spdk:cnode3127", 00:12:57.259 "max_cntlid": 65520, 00:12:57.259 "method": "nvmf_create_subsystem", 00:12:57.259 "req_id": 1 00:12:57.259 } 00:12:57.259 Got JSON-RPC error response 00:12:57.259 response: 00:12:57.259 { 00:12:57.259 "code": -32602, 00:12:57.259 "message": "Invalid cntlid range [1-65520]" 00:12:57.259 }' 00:12:57.259 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:57.259 { 00:12:57.259 "nqn": "nqn.2016-06.io.spdk:cnode3127", 00:12:57.259 "max_cntlid": 65520, 00:12:57.259 "method": "nvmf_create_subsystem", 00:12:57.259 "req_id": 1 00:12:57.259 } 00:12:57.259 Got JSON-RPC error response 00:12:57.259 response: 00:12:57.259 { 00:12:57.259 "code": -32602, 00:12:57.259 "message": "Invalid cntlid range [1-65520]" 00:12:57.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.520 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -i 6 -I 5 00:12:57.520 [2024-10-08 18:28:51.469540] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11: invalid cntlid range [6-5] 00:12:57.520 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:57.520 { 00:12:57.520 "nqn": "nqn.2016-06.io.spdk:cnode11", 00:12:57.520 "min_cntlid": 6, 00:12:57.520 "max_cntlid": 5, 00:12:57.520 "method": "nvmf_create_subsystem", 00:12:57.520 "req_id": 1 00:12:57.520 } 00:12:57.520 Got JSON-RPC error response 00:12:57.520 response: 00:12:57.520 { 00:12:57.520 "code": -32602, 00:12:57.520 "message": "Invalid cntlid range [6-5]" 00:12:57.520 }' 00:12:57.520 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:57.520 { 00:12:57.520 "nqn": "nqn.2016-06.io.spdk:cnode11", 00:12:57.520 "min_cntlid": 6, 00:12:57.520 "max_cntlid": 5, 00:12:57.520 "method": "nvmf_create_subsystem", 00:12:57.520 "req_id": 1 00:12:57.520 } 00:12:57.520 Got JSON-RPC error response 00:12:57.520 response: 00:12:57.520 { 00:12:57.520 "code": -32602, 00:12:57.520 "message": "Invalid cntlid range [6-5]" 00:12:57.520 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.520 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:57.780 { 00:12:57.780 "name": "foobar", 00:12:57.780 "method": "nvmf_delete_target", 00:12:57.780 "req_id": 1 00:12:57.780 } 00:12:57.780 
Got JSON-RPC error response 00:12:57.780 response: 00:12:57.780 { 00:12:57.780 "code": -32602, 00:12:57.780 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:57.780 }' 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:57.780 { 00:12:57.780 "name": "foobar", 00:12:57.780 "method": "nvmf_delete_target", 00:12:57.780 "req_id": 1 00:12:57.780 } 00:12:57.780 Got JSON-RPC error response 00:12:57.780 response: 00:12:57.780 { 00:12:57.780 "code": -32602, 00:12:57.780 "message": "The specified target doesn't exist, cannot delete it." 00:12:57.780 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.780 rmmod nvme_tcp 00:12:57.780 rmmod nvme_fabrics 00:12:57.780 rmmod nvme_keyring 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1148701 ']' 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1148701 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1148701 ']' 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1148701 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:12:57.780 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.781 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148701 00:12:57.781 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:57.781 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:57.781 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148701' 00:12:57.781 killing process with pid 1148701 00:12:57.781 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1148701 00:12:57.781 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1148701 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.041 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.955 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.955 00:12:59.955 real 0m14.335s 00:12:59.955 user 0m20.906s 00:12:59.955 sys 0m6.865s 00:12:59.955 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.955 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.955 ************************************ 00:12:59.955 END TEST nvmf_invalid 00:12:59.955 ************************************ 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.216 ************************************ 00:13:00.216 START TEST nvmf_connect_stress 00:13:00.216 ************************************ 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:00.216 * Looking for test storage... 
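The nvmf_invalid suite that just ended follows one pattern throughout: build a deliberately malformed argument, submit it over JSON-RPC, and assert that the captured error text matches. A minimal bash sketch of that pattern, assuming a running nvmf_tgt reachable through the stock scripts/rpc.py from this workspace; the helper name gen_random_mn and the reuse of cnode27425 are illustrative, not taken from the suite:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Build a 41-character model number from printable ASCII, one byte past the
    # 40-byte NVMe MN field, mirroring the printf %x / echo -e loop condensed above.
    gen_random_mn() {
        local s='' i c
        for ((i = 0; i < 41; i++)); do
            c=$((RANDOM % 94 + 33))                # 0x21-0x7e; skipping space keeps $(...) from trimming
            s+=$(echo -e "\x$(printf '%x' "$c")")
        done
        echo "$s"
    }

    mn=$(gen_random_mn)
    # The call is expected to fail; capture the JSON-RPC error and match on it.
    out=$("$rpc" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode27425 2>&1) || true
    [[ $out == *'Invalid MN'* ]] && echo 'negative test passed'

The cntlid checks that followed work the same way; the malformed input is just a -i/-I value outside 1-65519, or a minimum greater than the maximum, instead of an oversized model number.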
00:13:00.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.216 --rc genhtml_branch_coverage=1 00:13:00.216 --rc genhtml_function_coverage=1 00:13:00.216 --rc genhtml_legend=1 00:13:00.216 --rc geninfo_all_blocks=1 00:13:00.216 --rc geninfo_unexecuted_blocks=1 00:13:00.216 00:13:00.216 ' 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.216 --rc genhtml_branch_coverage=1 00:13:00.216 --rc genhtml_function_coverage=1 00:13:00.216 --rc genhtml_legend=1 00:13:00.216 --rc geninfo_all_blocks=1 00:13:00.216 --rc geninfo_unexecuted_blocks=1 00:13:00.216 00:13:00.216 ' 00:13:00.216 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.216 --rc genhtml_branch_coverage=1 00:13:00.216 --rc genhtml_function_coverage=1 00:13:00.216 --rc genhtml_legend=1 00:13:00.216 --rc geninfo_all_blocks=1 00:13:00.216 --rc geninfo_unexecuted_blocks=1 00:13:00.216 00:13:00.216 ' 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:00.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.217 --rc genhtml_branch_coverage=1 00:13:00.217 --rc genhtml_function_coverage=1 00:13:00.217 --rc genhtml_legend=1 00:13:00.217 --rc geninfo_all_blocks=1 00:13:00.217 --rc geninfo_unexecuted_blocks=1 00:13:00.217 00:13:00.217 ' 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.217 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=[value condensed: the triple /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin repeated several times by successive sourcing of paths/export.sh, followed by the standard system directories through /var/lib/snapd/snap/bin] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=[same value with /opt/go/1.21.1/bin prepended] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin prepended] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo [the exported PATH value, as condensed above] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:13:00.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:08.624 18:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:08.624 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:08.624 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:08.624 Found net devices under 0000:31:00.0: cvl_0_0 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:08.624 Found net devices under 0000:31:00.1: cvl_0_1 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:13:08.624 00:13:08.624 --- 10.0.0.2 ping statistics --- 00:13:08.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.624 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:13:08.624 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:13:08.624 00:13:08.624 --- 10.0.0.1 ping statistics --- 00:13:08.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.625 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1153950 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1153950 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1153950 ']' 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:08.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.625 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.625 [2024-10-08 18:29:02.043524] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:13:08.625 [2024-10-08 18:29:02.043588] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.625 [2024-10-08 18:29:02.136701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.625 [2024-10-08 18:29:02.230718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.625 [2024-10-08 18:29:02.230776] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.625 [2024-10-08 18:29:02.230785] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.625 [2024-10-08 18:29:02.230792] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.625 [2024-10-08 18:29:02.230799] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.625 [2024-10-08 18:29:02.232159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.625 [2024-10-08 18:29:02.232399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.625 [2024-10-08 18:29:02.232400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.886 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 [2024-10-08 18:29:02.922765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
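What runs here is the stress-target bring-up, issued one rpc_cmd at a time between xtrace toggles. Collected in one place, a sketch of the same sequence, assuming nvmf_tgt is already up and serving the default /var/tmp/spdk.sock that rpc_cmd talks to:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192            # flags from NVMF_TRANSPORT_OPTS above
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                        # any host, fixed serial, max 10 namespaces
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                            # target-side address from the netns setup
    "$rpc" bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512-byte blocks

Each step's exit status is what the [[ 0 == 0 ]] checks in the surrounding trace are verifying.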
00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.148 [2024-10-08 18:29:02.968988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.148 NULL1 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1154137 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.148 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:09.149 18:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.149 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.410 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.410 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:09.410 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.410 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.410 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.982 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.982 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:09.982 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.982 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.982 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.242 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.242 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:10.242 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.242 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.242 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.503 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.503 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:10.503 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.503 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.503 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.763 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.763 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:10.763 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.763 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.763 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.024 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.024 18:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:11.024 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.024 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.024 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.595 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:11.595 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.595 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.595 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.855 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.855 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:11.855 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.855 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.855 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.116 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.116 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:12.116 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.116 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.116 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.376 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.376 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:12.376 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.376 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.376 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.636 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.636 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:12.636 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.636 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.636 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.206 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.206 18:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:13.206 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.206 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.206 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.466 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.466 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:13.466 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.466 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.466 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.725 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.725 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:13.725 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.725 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.725 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.986 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.986 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:13.986 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.986 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.986 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.247 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.247 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:14.247 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.247 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.507 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.767 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.767 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:14.767 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.767 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.767 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.027 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.027 18:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:15.027 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.027 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.027 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.286 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.287 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:15.287 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.287 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.287 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.546 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.546 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:15.546 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.547 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.807 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.067 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.067 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:16.067 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.067 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.067 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.329 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.329 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:16.329 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.329 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.329 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.590 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.590 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:16.590 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.590 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.590 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.161 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.161 18:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:17.161 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.161 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.161 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.422 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.422 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:17.422 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.422 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.422 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.682 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.682 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:17.682 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.682 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.682 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.943 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.943 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:17.943 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.943 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.943 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.203 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.203 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:18.203 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.203 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.203 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.774 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.774 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:18.774 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.774 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.774 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.035 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.035 18:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:19.035 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.035 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.035 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.297 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1154137 00:13:19.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1154137) - No such process 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1154137 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.297 rmmod nvme_tcp 00:13:19.297 rmmod nvme_fabrics 00:13:19.297 rmmod nvme_keyring 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1153950 ']' 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1153950 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1153950 ']' 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1153950 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.297 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1153950 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
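The wall of kill -0 1154137 probes that ends above is not a kill at all: signal 0 delivers nothing and only asks the kernel whether the PID still exists, so the harness loops probe-then-RPC until connect_stress exits on its own (its -t 10 run time expires) and kill reports "No such process". A condensed sketch of the pattern, with the binary path and arguments taken from this trace; nvmf_get_subsystems stands in for the rpc.txt batch the script actually replays:

  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do       # signal 0: liveness probe only
      ./scripts/rpc.py nvmf_get_subsystems >/dev/null   # keep hammering the target meanwhile
  done
  wait "$PERF_PID"                                # reap it and inherit its exit code
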
00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1153950' 00:13:19.558 killing process with pid 1153950 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1153950 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1153950 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.558 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.104 00:13:22.104 real 0m21.515s 00:13:22.104 user 0m42.321s 00:13:22.104 sys 0m9.471s 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.104 ************************************ 00:13:22.104 END TEST nvmf_connect_stress 00:13:22.104 ************************************ 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.104 ************************************ 00:13:22.104 START TEST nvmf_fused_ordering 00:13:22.104 ************************************ 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:22.104 * Looking for test storage... 
00:13:22.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:22.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.104 --rc genhtml_branch_coverage=1 00:13:22.104 --rc genhtml_function_coverage=1 00:13:22.104 --rc genhtml_legend=1 00:13:22.104 --rc geninfo_all_blocks=1 00:13:22.104 --rc geninfo_unexecuted_blocks=1 00:13:22.104 00:13:22.104 ' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:22.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.104 --rc genhtml_branch_coverage=1 00:13:22.104 --rc genhtml_function_coverage=1 00:13:22.104 --rc genhtml_legend=1 00:13:22.104 --rc geninfo_all_blocks=1 00:13:22.104 --rc geninfo_unexecuted_blocks=1 00:13:22.104 00:13:22.104 ' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:22.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.104 --rc genhtml_branch_coverage=1 00:13:22.104 --rc genhtml_function_coverage=1 00:13:22.104 --rc genhtml_legend=1 00:13:22.104 --rc geninfo_all_blocks=1 00:13:22.104 --rc geninfo_unexecuted_blocks=1 00:13:22.104 00:13:22.104 ' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:22.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.104 --rc genhtml_branch_coverage=1 00:13:22.104 --rc genhtml_function_coverage=1 00:13:22.104 --rc genhtml_legend=1 00:13:22.104 --rc geninfo_all_blocks=1 00:13:22.104 --rc geninfo_unexecuted_blocks=1 00:13:22.104 00:13:22.104 ' 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.104 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:22.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.105 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.251 18:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.251 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:30.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:30.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:30.252 Found net devices under 0000:31:00.0: cvl_0_0 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:30.252 Found net devices under 0000:31:00.1: cvl_0_1 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:13:30.252 00:13:30.252 --- 10.0.0.2 ping statistics --- 00:13:30.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.252 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:13:30.252 00:13:30.252 --- 10.0.0.1 ping statistics --- 00:13:30.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.252 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1160402 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1160402 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1160402 ']' 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
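Before any NVMe-oF traffic flows, nvmf_tcp_init (traced above) splits the two E810 ports into a target-side network namespace and a host-side initiator, opens TCP port 4420 through iptables with a tagged rule, and proves reachability both ways with ping. Condensed into plain commands using the interface and address names from this run (root required; this is a sketch of the helper, not a drop-in replacement):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # The comment tag lets the cleanup phase strip exactly this rule later (see iptr below).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1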
/var/tmp/spdk.sock...' 00:13:30.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.252 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.252 [2024-10-08 18:29:23.613840] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:13:30.252 [2024-10-08 18:29:23.613905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.253 [2024-10-08 18:29:23.704253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.253 [2024-10-08 18:29:23.796834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.253 [2024-10-08 18:29:23.796892] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.253 [2024-10-08 18:29:23.796901] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.253 [2024-10-08 18:29:23.796908] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.253 [2024-10-08 18:29:23.796914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.253 [2024-10-08 18:29:23.797710] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 [2024-10-08 18:29:24.478062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 [2024-10-08 18:29:24.502352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 NULL1 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.515 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:30.776 [2024-10-08 18:29:24.572451] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
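fused_ordering.sh then provisions the target entirely over the RPC socket: it starts nvmf_tgt inside the namespace, creates a TCP transport with the options shown (-o -u 8192), a subsystem with serial SPDK00000000000001 allowing any host and up to 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev attached as namespace 1, then launches the fused_ordering initiator binary against it. The rpc_cmd wrapper in the trace boils down to scripts/rpc.py calls against /var/tmp/spdk.sock; roughly (paths relative to the SPDK tree, readiness probe simplified versus the harness's waitforlisten):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Crude stand-in for waitforlisten: poll until the RPC socket answers.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MB backing bdev, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'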
00:13:30.776 [2024-10-08 18:29:24.572517] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160751 ] 00:13:31.037 Attached to nqn.2016-06.io.spdk:cnode1 00:13:31.037 Namespace ID: 1 size: 1GB 00:13:31.037 fused_ordering(0) 00:13:31.037 fused_ordering(1) 00:13:31.037 fused_ordering(2) 00:13:31.037 fused_ordering(3) 00:13:31.037 fused_ordering(4) 00:13:31.037 fused_ordering(5) 00:13:31.037 fused_ordering(6) 00:13:31.037 fused_ordering(7) 00:13:31.037 fused_ordering(8) 00:13:31.037 fused_ordering(9) 00:13:31.037 fused_ordering(10) 00:13:31.037 fused_ordering(11) 00:13:31.037 fused_ordering(12) 00:13:31.037 fused_ordering(13) 00:13:31.037 fused_ordering(14) 00:13:31.037 fused_ordering(15) 00:13:31.037 fused_ordering(16) 00:13:31.037 fused_ordering(17) 00:13:31.037 fused_ordering(18) 00:13:31.037 fused_ordering(19) 00:13:31.037 fused_ordering(20) 00:13:31.037 fused_ordering(21) 00:13:31.037 fused_ordering(22) 00:13:31.037 fused_ordering(23) 00:13:31.037 fused_ordering(24) 00:13:31.037 fused_ordering(25) 00:13:31.037 fused_ordering(26) 00:13:31.037 fused_ordering(27) 00:13:31.037 fused_ordering(28) 00:13:31.037 fused_ordering(29) 00:13:31.037 fused_ordering(30) 00:13:31.037 fused_ordering(31) 00:13:31.037 fused_ordering(32) 00:13:31.037 fused_ordering(33) 00:13:31.037 fused_ordering(34) 00:13:31.037 fused_ordering(35) 00:13:31.037 fused_ordering(36) 00:13:31.037 fused_ordering(37) 00:13:31.037 fused_ordering(38) 00:13:31.037 fused_ordering(39) 00:13:31.037 fused_ordering(40) 00:13:31.037 fused_ordering(41) 00:13:31.037 fused_ordering(42) 00:13:31.037 fused_ordering(43) 00:13:31.037 fused_ordering(44) 00:13:31.037 fused_ordering(45) 00:13:31.037 fused_ordering(46) 00:13:31.037 fused_ordering(47) 00:13:31.037 fused_ordering(48) 00:13:31.037 fused_ordering(49) 00:13:31.037 fused_ordering(50) 00:13:31.037 fused_ordering(51) 00:13:31.037 fused_ordering(52) 00:13:31.037 fused_ordering(53) 00:13:31.037 fused_ordering(54) 00:13:31.037 fused_ordering(55) 00:13:31.037 fused_ordering(56) 00:13:31.037 fused_ordering(57) 00:13:31.037 fused_ordering(58) 00:13:31.037 fused_ordering(59) 00:13:31.037 fused_ordering(60) 00:13:31.037 fused_ordering(61) 00:13:31.037 fused_ordering(62) 00:13:31.037 fused_ordering(63) 00:13:31.037 fused_ordering(64) 00:13:31.037 fused_ordering(65) 00:13:31.037 fused_ordering(66) 00:13:31.037 fused_ordering(67) 00:13:31.037 fused_ordering(68) 00:13:31.037 fused_ordering(69) 00:13:31.037 fused_ordering(70) 00:13:31.037 fused_ordering(71) 00:13:31.037 fused_ordering(72) 00:13:31.037 fused_ordering(73) 00:13:31.037 fused_ordering(74) 00:13:31.037 fused_ordering(75) 00:13:31.037 fused_ordering(76) 00:13:31.037 fused_ordering(77) 00:13:31.037 fused_ordering(78) 00:13:31.037 fused_ordering(79) 00:13:31.037 fused_ordering(80) 00:13:31.037 fused_ordering(81) 00:13:31.037 fused_ordering(82) 00:13:31.037 fused_ordering(83) 00:13:31.037 fused_ordering(84) 00:13:31.037 fused_ordering(85) 00:13:31.037 fused_ordering(86) 00:13:31.037 fused_ordering(87) 00:13:31.037 fused_ordering(88) 00:13:31.037 fused_ordering(89) 00:13:31.038 fused_ordering(90) 00:13:31.038 fused_ordering(91) 00:13:31.038 fused_ordering(92) 00:13:31.038 fused_ordering(93) 00:13:31.038 fused_ordering(94) 00:13:31.038 fused_ordering(95) 00:13:31.038 fused_ordering(96) 00:13:31.038 fused_ordering(97) 00:13:31.038 fused_ordering(98) 
00:13:31.038 fused_ordering(99) 00:13:31.038 fused_ordering(100) 00:13:31.038 fused_ordering(101) 00:13:31.038 fused_ordering(102) 00:13:31.038 fused_ordering(103) 00:13:31.038 fused_ordering(104) 00:13:31.038 fused_ordering(105) 00:13:31.038 fused_ordering(106) 00:13:31.038 fused_ordering(107) 00:13:31.038 fused_ordering(108) 00:13:31.038 fused_ordering(109) 00:13:31.038 fused_ordering(110) 00:13:31.038 fused_ordering(111) 00:13:31.038 fused_ordering(112) 00:13:31.038 fused_ordering(113) 00:13:31.038 fused_ordering(114) 00:13:31.038 fused_ordering(115) 00:13:31.038 fused_ordering(116) 00:13:31.038 fused_ordering(117) 00:13:31.038 fused_ordering(118) 00:13:31.038 fused_ordering(119) 00:13:31.038 fused_ordering(120) 00:13:31.038 fused_ordering(121) 00:13:31.038 fused_ordering(122) 00:13:31.038 fused_ordering(123) 00:13:31.038 fused_ordering(124) 00:13:31.038 fused_ordering(125) 00:13:31.038 fused_ordering(126) 00:13:31.038 fused_ordering(127) 00:13:31.038 fused_ordering(128) 00:13:31.038 fused_ordering(129) 00:13:31.038 fused_ordering(130) 00:13:31.038 fused_ordering(131) 00:13:31.038 fused_ordering(132) 00:13:31.038 fused_ordering(133) 00:13:31.038 fused_ordering(134) 00:13:31.038 fused_ordering(135) 00:13:31.038 fused_ordering(136) 00:13:31.038 fused_ordering(137) 00:13:31.038 fused_ordering(138) 00:13:31.038 fused_ordering(139) 00:13:31.038 fused_ordering(140) 00:13:31.038 fused_ordering(141) 00:13:31.038 fused_ordering(142) 00:13:31.038 fused_ordering(143) 00:13:31.038 fused_ordering(144) 00:13:31.038 fused_ordering(145) 00:13:31.038 fused_ordering(146) 00:13:31.038 fused_ordering(147) 00:13:31.038 fused_ordering(148) 00:13:31.038 fused_ordering(149) 00:13:31.038 fused_ordering(150) 00:13:31.038 fused_ordering(151) 00:13:31.038 fused_ordering(152) 00:13:31.038 fused_ordering(153) 00:13:31.038 fused_ordering(154) 00:13:31.038 fused_ordering(155) 00:13:31.038 fused_ordering(156) 00:13:31.038 fused_ordering(157) 00:13:31.038 fused_ordering(158) 00:13:31.038 fused_ordering(159) 00:13:31.038 fused_ordering(160) 00:13:31.038 fused_ordering(161) 00:13:31.038 fused_ordering(162) 00:13:31.038 fused_ordering(163) 00:13:31.038 fused_ordering(164) 00:13:31.038 fused_ordering(165) 00:13:31.038 fused_ordering(166) 00:13:31.038 fused_ordering(167) 00:13:31.038 fused_ordering(168) 00:13:31.038 fused_ordering(169) 00:13:31.038 fused_ordering(170) 00:13:31.038 fused_ordering(171) 00:13:31.038 fused_ordering(172) 00:13:31.038 fused_ordering(173) 00:13:31.038 fused_ordering(174) 00:13:31.038 fused_ordering(175) 00:13:31.038 fused_ordering(176) 00:13:31.038 fused_ordering(177) 00:13:31.038 fused_ordering(178) 00:13:31.038 fused_ordering(179) 00:13:31.038 fused_ordering(180) 00:13:31.038 fused_ordering(181) 00:13:31.038 fused_ordering(182) 00:13:31.038 fused_ordering(183) 00:13:31.038 fused_ordering(184) 00:13:31.038 fused_ordering(185) 00:13:31.038 fused_ordering(186) 00:13:31.038 fused_ordering(187) 00:13:31.038 fused_ordering(188) 00:13:31.038 fused_ordering(189) 00:13:31.038 fused_ordering(190) 00:13:31.038 fused_ordering(191) 00:13:31.038 fused_ordering(192) 00:13:31.038 fused_ordering(193) 00:13:31.038 fused_ordering(194) 00:13:31.038 fused_ordering(195) 00:13:31.038 fused_ordering(196) 00:13:31.038 fused_ordering(197) 00:13:31.038 fused_ordering(198) 00:13:31.038 fused_ordering(199) 00:13:31.038 fused_ordering(200) 00:13:31.038 fused_ordering(201) 00:13:31.038 fused_ordering(202) 00:13:31.038 fused_ordering(203) 00:13:31.038 fused_ordering(204) 00:13:31.038 fused_ordering(205) 00:13:31.611 
fused_ordering(206) 00:13:31.611 fused_ordering(207) 00:13:31.611 fused_ordering(208) 00:13:31.611 fused_ordering(209) 00:13:31.611 fused_ordering(210) 00:13:31.611 fused_ordering(211) 00:13:31.611 fused_ordering(212) 00:13:31.611 fused_ordering(213) 00:13:31.611 fused_ordering(214) 00:13:31.611 fused_ordering(215) 00:13:31.611 fused_ordering(216) 00:13:31.611 fused_ordering(217) 00:13:31.611 fused_ordering(218) 00:13:31.611 fused_ordering(219) 00:13:31.611 fused_ordering(220) 00:13:31.611 fused_ordering(221) 00:13:31.611 fused_ordering(222) 00:13:31.611 fused_ordering(223) 00:13:31.611 fused_ordering(224) 00:13:31.611 fused_ordering(225) 00:13:31.611 fused_ordering(226) 00:13:31.611 fused_ordering(227) 00:13:31.611 fused_ordering(228) 00:13:31.611 fused_ordering(229) 00:13:31.611 fused_ordering(230) 00:13:31.611 fused_ordering(231) 00:13:31.611 fused_ordering(232) 00:13:31.611 fused_ordering(233) 00:13:31.611 fused_ordering(234) 00:13:31.611 fused_ordering(235) 00:13:31.611 fused_ordering(236) 00:13:31.611 fused_ordering(237) 00:13:31.611 fused_ordering(238) 00:13:31.611 fused_ordering(239) 00:13:31.611 fused_ordering(240) 00:13:31.611 fused_ordering(241) 00:13:31.611 fused_ordering(242) 00:13:31.611 fused_ordering(243) 00:13:31.611 fused_ordering(244) 00:13:31.611 fused_ordering(245) 00:13:31.611 fused_ordering(246) 00:13:31.611 fused_ordering(247) 00:13:31.611 fused_ordering(248) 00:13:31.611 fused_ordering(249) 00:13:31.611 fused_ordering(250) 00:13:31.611 fused_ordering(251) 00:13:31.611 fused_ordering(252) 00:13:31.611 fused_ordering(253) 00:13:31.611 fused_ordering(254) 00:13:31.611 fused_ordering(255) 00:13:31.611 fused_ordering(256) 00:13:31.611 fused_ordering(257) 00:13:31.611 fused_ordering(258) 00:13:31.611 fused_ordering(259) 00:13:31.611 fused_ordering(260) 00:13:31.611 fused_ordering(261) 00:13:31.611 fused_ordering(262) 00:13:31.611 fused_ordering(263) 00:13:31.611 fused_ordering(264) 00:13:31.611 fused_ordering(265) 00:13:31.611 fused_ordering(266) 00:13:31.611 fused_ordering(267) 00:13:31.611 fused_ordering(268) 00:13:31.611 fused_ordering(269) 00:13:31.611 fused_ordering(270) 00:13:31.611 fused_ordering(271) 00:13:31.611 fused_ordering(272) 00:13:31.611 fused_ordering(273) 00:13:31.611 fused_ordering(274) 00:13:31.611 fused_ordering(275) 00:13:31.611 fused_ordering(276) 00:13:31.611 fused_ordering(277) 00:13:31.611 fused_ordering(278) 00:13:31.611 fused_ordering(279) 00:13:31.611 fused_ordering(280) 00:13:31.611 fused_ordering(281) 00:13:31.611 fused_ordering(282) 00:13:31.611 fused_ordering(283) 00:13:31.611 fused_ordering(284) 00:13:31.611 fused_ordering(285) 00:13:31.611 fused_ordering(286) 00:13:31.611 fused_ordering(287) 00:13:31.611 fused_ordering(288) 00:13:31.611 fused_ordering(289) 00:13:31.611 fused_ordering(290) 00:13:31.611 fused_ordering(291) 00:13:31.611 fused_ordering(292) 00:13:31.611 fused_ordering(293) 00:13:31.611 fused_ordering(294) 00:13:31.611 fused_ordering(295) 00:13:31.611 fused_ordering(296) 00:13:31.611 fused_ordering(297) 00:13:31.611 fused_ordering(298) 00:13:31.611 fused_ordering(299) 00:13:31.611 fused_ordering(300) 00:13:31.611 fused_ordering(301) 00:13:31.611 fused_ordering(302) 00:13:31.611 fused_ordering(303) 00:13:31.611 fused_ordering(304) 00:13:31.611 fused_ordering(305) 00:13:31.611 fused_ordering(306) 00:13:31.611 fused_ordering(307) 00:13:31.611 fused_ordering(308) 00:13:31.611 fused_ordering(309) 00:13:31.611 fused_ordering(310) 00:13:31.611 fused_ordering(311) 00:13:31.611 fused_ordering(312) 00:13:31.611 fused_ordering(313) 
00:13:31.611 fused_ordering(314) 00:13:31.611 fused_ordering(315) 00:13:31.611 fused_ordering(316) 00:13:31.611 fused_ordering(317) 00:13:31.611 fused_ordering(318) 00:13:31.611 fused_ordering(319) 00:13:31.611 fused_ordering(320) 00:13:31.611 fused_ordering(321) 00:13:31.611 fused_ordering(322) 00:13:31.611 fused_ordering(323) 00:13:31.611 fused_ordering(324) 00:13:31.611 fused_ordering(325) 00:13:31.611 fused_ordering(326) 00:13:31.611 fused_ordering(327) 00:13:31.611 fused_ordering(328) 00:13:31.611 fused_ordering(329) 00:13:31.611 fused_ordering(330) 00:13:31.611 fused_ordering(331) 00:13:31.611 fused_ordering(332) 00:13:31.611 fused_ordering(333) 00:13:31.611 fused_ordering(334) 00:13:31.611 fused_ordering(335) 00:13:31.611 fused_ordering(336) 00:13:31.611 fused_ordering(337) 00:13:31.611 fused_ordering(338) 00:13:31.611 fused_ordering(339) 00:13:31.611 fused_ordering(340) 00:13:31.611 fused_ordering(341) 00:13:31.611 fused_ordering(342) 00:13:31.611 fused_ordering(343) 00:13:31.611 fused_ordering(344) 00:13:31.611 fused_ordering(345) 00:13:31.611 fused_ordering(346) 00:13:31.611 fused_ordering(347) 00:13:31.611 fused_ordering(348) 00:13:31.611 fused_ordering(349) 00:13:31.611 fused_ordering(350) 00:13:31.611 fused_ordering(351) 00:13:31.611 fused_ordering(352) 00:13:31.611 fused_ordering(353) 00:13:31.611 fused_ordering(354) 00:13:31.611 fused_ordering(355) 00:13:31.611 fused_ordering(356) 00:13:31.611 fused_ordering(357) 00:13:31.611 fused_ordering(358) 00:13:31.611 fused_ordering(359) 00:13:31.611 fused_ordering(360) 00:13:31.611 fused_ordering(361) 00:13:31.611 fused_ordering(362) 00:13:31.611 fused_ordering(363) 00:13:31.611 fused_ordering(364) 00:13:31.611 fused_ordering(365) 00:13:31.611 fused_ordering(366) 00:13:31.611 fused_ordering(367) 00:13:31.611 fused_ordering(368) 00:13:31.611 fused_ordering(369) 00:13:31.611 fused_ordering(370) 00:13:31.611 fused_ordering(371) 00:13:31.611 fused_ordering(372) 00:13:31.611 fused_ordering(373) 00:13:31.611 fused_ordering(374) 00:13:31.611 fused_ordering(375) 00:13:31.611 fused_ordering(376) 00:13:31.611 fused_ordering(377) 00:13:31.611 fused_ordering(378) 00:13:31.611 fused_ordering(379) 00:13:31.611 fused_ordering(380) 00:13:31.611 fused_ordering(381) 00:13:31.611 fused_ordering(382) 00:13:31.611 fused_ordering(383) 00:13:31.611 fused_ordering(384) 00:13:31.611 fused_ordering(385) 00:13:31.611 fused_ordering(386) 00:13:31.611 fused_ordering(387) 00:13:31.611 fused_ordering(388) 00:13:31.611 fused_ordering(389) 00:13:31.611 fused_ordering(390) 00:13:31.611 fused_ordering(391) 00:13:31.611 fused_ordering(392) 00:13:31.611 fused_ordering(393) 00:13:31.611 fused_ordering(394) 00:13:31.611 fused_ordering(395) 00:13:31.611 fused_ordering(396) 00:13:31.611 fused_ordering(397) 00:13:31.611 fused_ordering(398) 00:13:31.611 fused_ordering(399) 00:13:31.611 fused_ordering(400) 00:13:31.611 fused_ordering(401) 00:13:31.611 fused_ordering(402) 00:13:31.611 fused_ordering(403) 00:13:31.611 fused_ordering(404) 00:13:31.611 fused_ordering(405) 00:13:31.611 fused_ordering(406) 00:13:31.611 fused_ordering(407) 00:13:31.611 fused_ordering(408) 00:13:31.611 fused_ordering(409) 00:13:31.611 fused_ordering(410) 00:13:31.873 fused_ordering(411) 00:13:31.873 fused_ordering(412) 00:13:31.873 fused_ordering(413) 00:13:31.873 fused_ordering(414) 00:13:31.873 fused_ordering(415) 00:13:31.873 fused_ordering(416) 00:13:31.873 fused_ordering(417) 00:13:31.873 fused_ordering(418) 00:13:31.873 fused_ordering(419) 00:13:31.873 fused_ordering(420) 00:13:31.873 
fused_ordering(421) 00:13:31.873 fused_ordering(422) 00:13:31.873 fused_ordering(423) 00:13:31.873 fused_ordering(424) 00:13:31.873 fused_ordering(425) 00:13:31.873 fused_ordering(426) 00:13:31.873 fused_ordering(427) 00:13:31.873 fused_ordering(428) 00:13:31.873 fused_ordering(429) 00:13:31.873 fused_ordering(430) 00:13:31.873 fused_ordering(431) 00:13:31.873 fused_ordering(432) 00:13:31.873 fused_ordering(433) 00:13:31.873 fused_ordering(434) 00:13:31.873 fused_ordering(435) 00:13:31.873 fused_ordering(436) 00:13:31.873 fused_ordering(437) 00:13:31.873 fused_ordering(438) 00:13:31.873 fused_ordering(439) 00:13:31.873 fused_ordering(440) 00:13:31.873 fused_ordering(441) 00:13:31.873 fused_ordering(442) 00:13:31.873 fused_ordering(443) 00:13:31.873 fused_ordering(444) 00:13:31.873 fused_ordering(445) 00:13:31.873 fused_ordering(446) 00:13:31.873 fused_ordering(447) 00:13:31.873 fused_ordering(448) 00:13:31.873 fused_ordering(449) 00:13:31.873 fused_ordering(450) 00:13:31.873 fused_ordering(451) 00:13:31.873 fused_ordering(452) 00:13:31.873 fused_ordering(453) 00:13:31.873 fused_ordering(454) 00:13:31.873 fused_ordering(455) 00:13:31.873 fused_ordering(456) 00:13:31.873 fused_ordering(457) 00:13:31.873 fused_ordering(458) 00:13:31.873 fused_ordering(459) 00:13:31.873 fused_ordering(460) 00:13:31.873 fused_ordering(461) 00:13:31.873 fused_ordering(462) 00:13:31.873 fused_ordering(463) 00:13:31.873 fused_ordering(464) 00:13:31.873 fused_ordering(465) 00:13:31.873 fused_ordering(466) 00:13:31.873 fused_ordering(467) 00:13:31.873 fused_ordering(468) 00:13:31.873 fused_ordering(469) 00:13:31.873 fused_ordering(470) 00:13:31.873 fused_ordering(471) 00:13:31.873 fused_ordering(472) 00:13:31.873 fused_ordering(473) 00:13:31.873 fused_ordering(474) 00:13:31.873 fused_ordering(475) 00:13:31.873 fused_ordering(476) 00:13:31.873 fused_ordering(477) 00:13:31.873 fused_ordering(478) 00:13:31.873 fused_ordering(479) 00:13:31.873 fused_ordering(480) 00:13:31.873 fused_ordering(481) 00:13:31.873 fused_ordering(482) 00:13:31.873 fused_ordering(483) 00:13:31.873 fused_ordering(484) 00:13:31.873 fused_ordering(485) 00:13:31.873 fused_ordering(486) 00:13:31.873 fused_ordering(487) 00:13:31.873 fused_ordering(488) 00:13:31.873 fused_ordering(489) 00:13:31.873 fused_ordering(490) 00:13:31.873 fused_ordering(491) 00:13:31.873 fused_ordering(492) 00:13:31.873 fused_ordering(493) 00:13:31.873 fused_ordering(494) 00:13:31.873 fused_ordering(495) 00:13:31.873 fused_ordering(496) 00:13:31.873 fused_ordering(497) 00:13:31.873 fused_ordering(498) 00:13:31.873 fused_ordering(499) 00:13:31.873 fused_ordering(500) 00:13:31.873 fused_ordering(501) 00:13:31.873 fused_ordering(502) 00:13:31.873 fused_ordering(503) 00:13:31.873 fused_ordering(504) 00:13:31.873 fused_ordering(505) 00:13:31.873 fused_ordering(506) 00:13:31.873 fused_ordering(507) 00:13:31.873 fused_ordering(508) 00:13:31.873 fused_ordering(509) 00:13:31.873 fused_ordering(510) 00:13:31.873 fused_ordering(511) 00:13:31.873 fused_ordering(512) 00:13:31.873 fused_ordering(513) 00:13:31.873 fused_ordering(514) 00:13:31.873 fused_ordering(515) 00:13:31.873 fused_ordering(516) 00:13:31.873 fused_ordering(517) 00:13:31.873 fused_ordering(518) 00:13:31.873 fused_ordering(519) 00:13:31.873 fused_ordering(520) 00:13:31.873 fused_ordering(521) 00:13:31.873 fused_ordering(522) 00:13:31.873 fused_ordering(523) 00:13:31.873 fused_ordering(524) 00:13:31.873 fused_ordering(525) 00:13:31.873 fused_ordering(526) 00:13:31.873 fused_ordering(527) 00:13:31.873 fused_ordering(528) 
00:13:31.873 fused_ordering(529) 00:13:31.873 fused_ordering(530) 00:13:31.873 fused_ordering(531) 00:13:31.873 fused_ordering(532) 00:13:31.873 fused_ordering(533) 00:13:31.873 fused_ordering(534) 00:13:31.873 fused_ordering(535) 00:13:31.873 fused_ordering(536) 00:13:31.873 fused_ordering(537) 00:13:31.873 fused_ordering(538) 00:13:31.873 fused_ordering(539) 00:13:31.873 fused_ordering(540) 00:13:31.873 fused_ordering(541) 00:13:31.873 fused_ordering(542) 00:13:31.873 fused_ordering(543) 00:13:31.873 fused_ordering(544) 00:13:31.873 fused_ordering(545) 00:13:31.873 fused_ordering(546) 00:13:31.873 fused_ordering(547) 00:13:31.873 fused_ordering(548) 00:13:31.873 fused_ordering(549) 00:13:31.873 fused_ordering(550) 00:13:31.873 fused_ordering(551) 00:13:31.873 fused_ordering(552) 00:13:31.873 fused_ordering(553) 00:13:31.873 fused_ordering(554) 00:13:31.873 fused_ordering(555) 00:13:31.873 fused_ordering(556) 00:13:31.873 fused_ordering(557) 00:13:31.873 fused_ordering(558) 00:13:31.873 fused_ordering(559) 00:13:31.873 fused_ordering(560) 00:13:31.873 fused_ordering(561) 00:13:31.873 fused_ordering(562) 00:13:31.873 fused_ordering(563) 00:13:31.873 fused_ordering(564) 00:13:31.873 fused_ordering(565) 00:13:31.873 fused_ordering(566) 00:13:31.873 fused_ordering(567) 00:13:31.873 fused_ordering(568) 00:13:31.873 fused_ordering(569) 00:13:31.873 fused_ordering(570) 00:13:31.873 fused_ordering(571) 00:13:31.873 fused_ordering(572) 00:13:31.873 fused_ordering(573) 00:13:31.873 fused_ordering(574) 00:13:31.873 fused_ordering(575) 00:13:31.873 fused_ordering(576) 00:13:31.873 fused_ordering(577) 00:13:31.873 fused_ordering(578) 00:13:31.873 fused_ordering(579) 00:13:31.873 fused_ordering(580) 00:13:31.873 fused_ordering(581) 00:13:31.873 fused_ordering(582) 00:13:31.873 fused_ordering(583) 00:13:31.873 fused_ordering(584) 00:13:31.873 fused_ordering(585) 00:13:31.873 fused_ordering(586) 00:13:31.873 fused_ordering(587) 00:13:31.873 fused_ordering(588) 00:13:31.873 fused_ordering(589) 00:13:31.873 fused_ordering(590) 00:13:31.873 fused_ordering(591) 00:13:31.873 fused_ordering(592) 00:13:31.873 fused_ordering(593) 00:13:31.873 fused_ordering(594) 00:13:31.873 fused_ordering(595) 00:13:31.873 fused_ordering(596) 00:13:31.873 fused_ordering(597) 00:13:31.873 fused_ordering(598) 00:13:31.873 fused_ordering(599) 00:13:31.873 fused_ordering(600) 00:13:31.873 fused_ordering(601) 00:13:31.873 fused_ordering(602) 00:13:31.873 fused_ordering(603) 00:13:31.873 fused_ordering(604) 00:13:31.873 fused_ordering(605) 00:13:31.873 fused_ordering(606) 00:13:31.873 fused_ordering(607) 00:13:31.873 fused_ordering(608) 00:13:31.873 fused_ordering(609) 00:13:31.873 fused_ordering(610) 00:13:31.873 fused_ordering(611) 00:13:31.873 fused_ordering(612) 00:13:31.873 fused_ordering(613) 00:13:31.873 fused_ordering(614) 00:13:31.873 fused_ordering(615) 00:13:32.445 fused_ordering(616) 00:13:32.445 fused_ordering(617) 00:13:32.445 fused_ordering(618) 00:13:32.445 fused_ordering(619) 00:13:32.445 fused_ordering(620) 00:13:32.445 fused_ordering(621) 00:13:32.445 fused_ordering(622) 00:13:32.445 fused_ordering(623) 00:13:32.445 fused_ordering(624) 00:13:32.445 fused_ordering(625) 00:13:32.445 fused_ordering(626) 00:13:32.445 fused_ordering(627) 00:13:32.445 fused_ordering(628) 00:13:32.445 fused_ordering(629) 00:13:32.445 fused_ordering(630) 00:13:32.445 fused_ordering(631) 00:13:32.445 fused_ordering(632) 00:13:32.445 fused_ordering(633) 00:13:32.445 fused_ordering(634) 00:13:32.445 fused_ordering(635) 00:13:32.445 
fused_ordering(636) 00:13:32.445 fused_ordering(637) 00:13:32.445 fused_ordering(638) 00:13:32.445 fused_ordering(639) 00:13:32.445 fused_ordering(640) 00:13:32.445 fused_ordering(641) 00:13:32.445 fused_ordering(642) 00:13:32.445 fused_ordering(643) 00:13:32.445 fused_ordering(644) 00:13:32.445 fused_ordering(645) 00:13:32.445 fused_ordering(646) 00:13:32.445 fused_ordering(647) 00:13:32.445 fused_ordering(648) 00:13:32.445 fused_ordering(649) 00:13:32.445 fused_ordering(650) 00:13:32.445 fused_ordering(651) 00:13:32.445 fused_ordering(652) 00:13:32.445 fused_ordering(653) 00:13:32.445 fused_ordering(654) 00:13:32.445 fused_ordering(655) 00:13:32.445 fused_ordering(656) 00:13:32.445 fused_ordering(657) 00:13:32.445 fused_ordering(658) 00:13:32.445 fused_ordering(659) 00:13:32.445 fused_ordering(660) 00:13:32.445 fused_ordering(661) 00:13:32.445 fused_ordering(662) 00:13:32.445 fused_ordering(663) 00:13:32.445 fused_ordering(664) 00:13:32.445 fused_ordering(665) 00:13:32.445 fused_ordering(666) 00:13:32.445 fused_ordering(667) 00:13:32.445 fused_ordering(668) 00:13:32.445 fused_ordering(669) 00:13:32.445 fused_ordering(670) 00:13:32.445 fused_ordering(671) 00:13:32.445 fused_ordering(672) 00:13:32.445 fused_ordering(673) 00:13:32.445 fused_ordering(674) 00:13:32.445 fused_ordering(675) 00:13:32.445 fused_ordering(676) 00:13:32.445 fused_ordering(677) 00:13:32.445 fused_ordering(678) 00:13:32.445 fused_ordering(679) 00:13:32.445 fused_ordering(680) 00:13:32.445 fused_ordering(681) 00:13:32.445 fused_ordering(682) 00:13:32.445 fused_ordering(683) 00:13:32.445 fused_ordering(684) 00:13:32.445 fused_ordering(685) 00:13:32.445 fused_ordering(686) 00:13:32.445 fused_ordering(687) 00:13:32.445 fused_ordering(688) 00:13:32.445 fused_ordering(689) 00:13:32.445 fused_ordering(690) 00:13:32.445 fused_ordering(691) 00:13:32.445 fused_ordering(692) 00:13:32.445 fused_ordering(693) 00:13:32.445 fused_ordering(694) 00:13:32.445 fused_ordering(695) 00:13:32.445 fused_ordering(696) 00:13:32.445 fused_ordering(697) 00:13:32.445 fused_ordering(698) 00:13:32.445 fused_ordering(699) 00:13:32.445 fused_ordering(700) 00:13:32.445 fused_ordering(701) 00:13:32.445 fused_ordering(702) 00:13:32.445 fused_ordering(703) 00:13:32.445 fused_ordering(704) 00:13:32.445 fused_ordering(705) 00:13:32.445 fused_ordering(706) 00:13:32.445 fused_ordering(707) 00:13:32.445 fused_ordering(708) 00:13:32.445 fused_ordering(709) 00:13:32.445 fused_ordering(710) 00:13:32.445 fused_ordering(711) 00:13:32.445 fused_ordering(712) 00:13:32.445 fused_ordering(713) 00:13:32.445 fused_ordering(714) 00:13:32.445 fused_ordering(715) 00:13:32.445 fused_ordering(716) 00:13:32.445 fused_ordering(717) 00:13:32.445 fused_ordering(718) 00:13:32.445 fused_ordering(719) 00:13:32.445 fused_ordering(720) 00:13:32.445 fused_ordering(721) 00:13:32.445 fused_ordering(722) 00:13:32.445 fused_ordering(723) 00:13:32.445 fused_ordering(724) 00:13:32.445 fused_ordering(725) 00:13:32.445 fused_ordering(726) 00:13:32.445 fused_ordering(727) 00:13:32.445 fused_ordering(728) 00:13:32.445 fused_ordering(729) 00:13:32.445 fused_ordering(730) 00:13:32.445 fused_ordering(731) 00:13:32.445 fused_ordering(732) 00:13:32.445 fused_ordering(733) 00:13:32.445 fused_ordering(734) 00:13:32.445 fused_ordering(735) 00:13:32.445 fused_ordering(736) 00:13:32.445 fused_ordering(737) 00:13:32.445 fused_ordering(738) 00:13:32.445 fused_ordering(739) 00:13:32.445 fused_ordering(740) 00:13:32.445 fused_ordering(741) 00:13:32.445 fused_ordering(742) 00:13:32.445 fused_ordering(743) 
00:13:32.445 fused_ordering(744) 00:13:32.445 fused_ordering(745) 00:13:32.445 fused_ordering(746) 00:13:32.445 fused_ordering(747) 00:13:32.445 fused_ordering(748) 00:13:32.445 fused_ordering(749) 00:13:32.445 fused_ordering(750) 00:13:32.445 fused_ordering(751) 00:13:32.445 fused_ordering(752) 00:13:32.445 fused_ordering(753) 00:13:32.445 fused_ordering(754) 00:13:32.445 fused_ordering(755) 00:13:32.445 fused_ordering(756) 00:13:32.445 fused_ordering(757) 00:13:32.445 fused_ordering(758) 00:13:32.445 fused_ordering(759) 00:13:32.445 fused_ordering(760) 00:13:32.445 fused_ordering(761) 00:13:32.445 fused_ordering(762) 00:13:32.445 fused_ordering(763) 00:13:32.445 fused_ordering(764) 00:13:32.445 fused_ordering(765) 00:13:32.445 fused_ordering(766) 00:13:32.445 fused_ordering(767) 00:13:32.445 fused_ordering(768) 00:13:32.445 fused_ordering(769) 00:13:32.445 fused_ordering(770) 00:13:32.445 fused_ordering(771) 00:13:32.445 fused_ordering(772) 00:13:32.445 fused_ordering(773) 00:13:32.445 fused_ordering(774) 00:13:32.445 fused_ordering(775) 00:13:32.445 fused_ordering(776) 00:13:32.445 fused_ordering(777) 00:13:32.445 fused_ordering(778) 00:13:32.445 fused_ordering(779) 00:13:32.445 fused_ordering(780) 00:13:32.445 fused_ordering(781) 00:13:32.445 fused_ordering(782) 00:13:32.445 fused_ordering(783) 00:13:32.445 fused_ordering(784) 00:13:32.445 fused_ordering(785) 00:13:32.445 fused_ordering(786) 00:13:32.445 fused_ordering(787) 00:13:32.445 fused_ordering(788) 00:13:32.445 fused_ordering(789) 00:13:32.445 fused_ordering(790) 00:13:32.445 fused_ordering(791) 00:13:32.445 fused_ordering(792) 00:13:32.445 fused_ordering(793) 00:13:32.445 fused_ordering(794) 00:13:32.445 fused_ordering(795) 00:13:32.446 fused_ordering(796) 00:13:32.446 fused_ordering(797) 00:13:32.446 fused_ordering(798) 00:13:32.446 fused_ordering(799) 00:13:32.446 fused_ordering(800) 00:13:32.446 fused_ordering(801) 00:13:32.446 fused_ordering(802) 00:13:32.446 fused_ordering(803) 00:13:32.446 fused_ordering(804) 00:13:32.446 fused_ordering(805) 00:13:32.446 fused_ordering(806) 00:13:32.446 fused_ordering(807) 00:13:32.446 fused_ordering(808) 00:13:32.446 fused_ordering(809) 00:13:32.446 fused_ordering(810) 00:13:32.446 fused_ordering(811) 00:13:32.446 fused_ordering(812) 00:13:32.446 fused_ordering(813) 00:13:32.446 fused_ordering(814) 00:13:32.446 fused_ordering(815) 00:13:32.446 fused_ordering(816) 00:13:32.446 fused_ordering(817) 00:13:32.446 fused_ordering(818) 00:13:32.446 fused_ordering(819) 00:13:32.446 fused_ordering(820) 00:13:33.017
[2024-10-08 18:29:26.776794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933570 is same with the state(6) to be set
00:13:33.016 fused_ordering(821) 00:13:33.016 fused_ordering(822) 00:13:33.016 fused_ordering(823) 00:13:33.016 fused_ordering(824) 00:13:33.016 fused_ordering(825) 00:13:33.016 fused_ordering(826) 00:13:33.016 fused_ordering(827) 00:13:33.016 fused_ordering(828) 00:13:33.016 fused_ordering(829) 00:13:33.016 fused_ordering(830) 00:13:33.016 fused_ordering(831) 00:13:33.016 fused_ordering(832) 00:13:33.016 fused_ordering(833) 00:13:33.016 fused_ordering(834) 00:13:33.017 fused_ordering(835) 00:13:33.017 fused_ordering(836) 00:13:33.017 fused_ordering(837) 00:13:33.017 fused_ordering(838) 00:13:33.017 fused_ordering(839) 00:13:33.017 fused_ordering(840) 00:13:33.017 fused_ordering(841) 00:13:33.017 fused_ordering(842) 00:13:33.017 fused_ordering(843) 00:13:33.017 fused_ordering(844) 00:13:33.017 fused_ordering(845) 00:13:33.017
fused_ordering(846) 00:13:33.017 fused_ordering(847) 00:13:33.017 fused_ordering(848) 00:13:33.017 fused_ordering(849) 00:13:33.017 fused_ordering(850) 00:13:33.017 fused_ordering(851) 00:13:33.017 fused_ordering(852) 00:13:33.017 fused_ordering(853) 00:13:33.017 fused_ordering(854) 00:13:33.017 fused_ordering(855) 00:13:33.017 fused_ordering(856) 00:13:33.017 fused_ordering(857) 00:13:33.017 fused_ordering(858) 00:13:33.017 fused_ordering(859) 00:13:33.017 fused_ordering(860) 00:13:33.017 fused_ordering(861) 00:13:33.017 fused_ordering(862) 00:13:33.017 fused_ordering(863) 00:13:33.017 fused_ordering(864) 00:13:33.017 fused_ordering(865) 00:13:33.017 fused_ordering(866) 00:13:33.017 fused_ordering(867) 00:13:33.017 fused_ordering(868) 00:13:33.017 fused_ordering(869) 00:13:33.017 fused_ordering(870) 00:13:33.017 fused_ordering(871) 00:13:33.017 fused_ordering(872) 00:13:33.017 fused_ordering(873) 00:13:33.017 fused_ordering(874) 00:13:33.017 fused_ordering(875) 00:13:33.017 fused_ordering(876) 00:13:33.017 fused_ordering(877) 00:13:33.017 fused_ordering(878) 00:13:33.017 fused_ordering(879) 00:13:33.017 fused_ordering(880) 00:13:33.017 fused_ordering(881) 00:13:33.017 fused_ordering(882) 00:13:33.017 fused_ordering(883) 00:13:33.017 fused_ordering(884) 00:13:33.017 fused_ordering(885) 00:13:33.017 fused_ordering(886) 00:13:33.017 fused_ordering(887) 00:13:33.017 fused_ordering(888) 00:13:33.017 fused_ordering(889) 00:13:33.017 fused_ordering(890) 00:13:33.017 fused_ordering(891) 00:13:33.017 fused_ordering(892) 00:13:33.017 fused_ordering(893) 00:13:33.017 fused_ordering(894) 00:13:33.017 fused_ordering(895) 00:13:33.017 fused_ordering(896) 00:13:33.017 fused_ordering(897) 00:13:33.017 fused_ordering(898) 00:13:33.017 fused_ordering(899) 00:13:33.017 fused_ordering(900) 00:13:33.017 fused_ordering(901) 00:13:33.017 fused_ordering(902) 00:13:33.017 fused_ordering(903) 00:13:33.017 fused_ordering(904) 00:13:33.017 fused_ordering(905) 00:13:33.017 fused_ordering(906) 00:13:33.017 fused_ordering(907) 00:13:33.017 fused_ordering(908) 00:13:33.017 fused_ordering(909) 00:13:33.017 fused_ordering(910) 00:13:33.017 fused_ordering(911) 00:13:33.017 fused_ordering(912) 00:13:33.017 fused_ordering(913) 00:13:33.017 fused_ordering(914) 00:13:33.017 fused_ordering(915) 00:13:33.017 fused_ordering(916) 00:13:33.017 fused_ordering(917) 00:13:33.017 fused_ordering(918) 00:13:33.017 fused_ordering(919) 00:13:33.017 fused_ordering(920) 00:13:33.017 fused_ordering(921) 00:13:33.017 fused_ordering(922) 00:13:33.017 fused_ordering(923) 00:13:33.017 fused_ordering(924) 00:13:33.017 fused_ordering(925) 00:13:33.017 fused_ordering(926) 00:13:33.017 fused_ordering(927) 00:13:33.017 fused_ordering(928) 00:13:33.017 fused_ordering(929) 00:13:33.017 fused_ordering(930) 00:13:33.017 fused_ordering(931) 00:13:33.017 fused_ordering(932) 00:13:33.017 fused_ordering(933) 00:13:33.017 fused_ordering(934) 00:13:33.017 fused_ordering(935) 00:13:33.017 fused_ordering(936) 00:13:33.017 fused_ordering(937) 00:13:33.017 fused_ordering(938) 00:13:33.017 fused_ordering(939) 00:13:33.017 fused_ordering(940) 00:13:33.017 fused_ordering(941) 00:13:33.017 fused_ordering(942) 00:13:33.017 fused_ordering(943) 00:13:33.017 fused_ordering(944) 00:13:33.017 fused_ordering(945) 00:13:33.017 fused_ordering(946) 00:13:33.017 fused_ordering(947) 00:13:33.017 fused_ordering(948) 00:13:33.017 fused_ordering(949) 00:13:33.017 fused_ordering(950) 00:13:33.017 fused_ordering(951) 00:13:33.017 fused_ordering(952) 00:13:33.017 fused_ordering(953) 
00:13:33.017 fused_ordering(954) 00:13:33.017 fused_ordering(955) 00:13:33.017 fused_ordering(956) 00:13:33.017 fused_ordering(957) 00:13:33.017 fused_ordering(958) 00:13:33.017 fused_ordering(959) 00:13:33.017 fused_ordering(960) 00:13:33.017 fused_ordering(961) 00:13:33.017 fused_ordering(962) 00:13:33.017 fused_ordering(963) 00:13:33.017 fused_ordering(964) 00:13:33.017 fused_ordering(965) 00:13:33.017 fused_ordering(966) 00:13:33.017 fused_ordering(967) 00:13:33.017 fused_ordering(968) 00:13:33.017 fused_ordering(969) 00:13:33.017 fused_ordering(970) 00:13:33.017 fused_ordering(971) 00:13:33.017 fused_ordering(972) 00:13:33.017 fused_ordering(973) 00:13:33.017 fused_ordering(974) 00:13:33.017 fused_ordering(975) 00:13:33.017 fused_ordering(976) 00:13:33.017 fused_ordering(977) 00:13:33.017 fused_ordering(978) 00:13:33.017 fused_ordering(979) 00:13:33.017 fused_ordering(980) 00:13:33.017 fused_ordering(981) 00:13:33.017 fused_ordering(982) 00:13:33.017 fused_ordering(983) 00:13:33.017 fused_ordering(984) 00:13:33.017 fused_ordering(985) 00:13:33.017 fused_ordering(986) 00:13:33.017 fused_ordering(987) 00:13:33.017 fused_ordering(988) 00:13:33.017 fused_ordering(989) 00:13:33.017 fused_ordering(990) 00:13:33.017 fused_ordering(991) 00:13:33.017 fused_ordering(992) 00:13:33.017 fused_ordering(993) 00:13:33.017 fused_ordering(994) 00:13:33.017 fused_ordering(995) 00:13:33.017 fused_ordering(996) 00:13:33.017 fused_ordering(997) 00:13:33.017 fused_ordering(998) 00:13:33.017 fused_ordering(999) 00:13:33.017 fused_ordering(1000) 00:13:33.017 fused_ordering(1001) 00:13:33.017 fused_ordering(1002) 00:13:33.017 fused_ordering(1003) 00:13:33.017 fused_ordering(1004) 00:13:33.017 fused_ordering(1005) 00:13:33.017 fused_ordering(1006) 00:13:33.017 fused_ordering(1007) 00:13:33.017 fused_ordering(1008) 00:13:33.017 fused_ordering(1009) 00:13:33.017 fused_ordering(1010) 00:13:33.017 fused_ordering(1011) 00:13:33.017 fused_ordering(1012) 00:13:33.017 fused_ordering(1013) 00:13:33.017 fused_ordering(1014) 00:13:33.017 fused_ordering(1015) 00:13:33.017 fused_ordering(1016) 00:13:33.017 fused_ordering(1017) 00:13:33.017 fused_ordering(1018) 00:13:33.017 fused_ordering(1019) 00:13:33.017 fused_ordering(1020) 00:13:33.017 fused_ordering(1021) 00:13:33.017 fused_ordering(1022) 00:13:33.017 fused_ordering(1023) 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.017 rmmod nvme_tcp 00:13:33.017 rmmod nvme_fabrics 00:13:33.017 rmmod nvme_keyring 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1160402 ']' 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1160402 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1160402 ']' 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1160402 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1160402 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1160402' 00:13:33.017 killing process with pid 1160402 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1160402 00:13:33.017 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1160402 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.017 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.569 00:13:35.569 real 0m13.487s 00:13:35.569 user 0m7.094s 00:13:35.569 sys 0m7.067s 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
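With the test finished and its trap cleared, nvmftestfini (traced above) tears the fixture down in reverse: unload the initiator-side kernel modules, kill the target, strip only the SPDK-tagged firewall rules, and delete the namespace. A condensed sketch of what those helpers expand to in this run (error handling and the harness's process-name checks omitted):

    sync
    modprobe -v -r nvme-tcp          # rmmod cascades through nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1160402 && wait 1160402     # killprocess: stop nvmf_tgt (reactor_1 here)
    # iptr: restore the ruleset minus anything tagged SPDK_NVMF by the setup phase.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk  # _remove_spdk_ns
    ip -4 addr flush cvl_0_1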
-- # set +x 00:13:35.569 ************************************ 00:13:35.569 END TEST nvmf_fused_ordering 00:13:35.569 ************************************ 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.569 ************************************ 00:13:35.569 START TEST nvmf_ns_masking 00:13:35.569 ************************************ 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:35.569 * Looking for test storage... 00:13:35.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:35.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.569 --rc genhtml_branch_coverage=1 00:13:35.569 --rc genhtml_function_coverage=1 00:13:35.569 --rc genhtml_legend=1 00:13:35.569 --rc geninfo_all_blocks=1 00:13:35.569 --rc geninfo_unexecuted_blocks=1 00:13:35.569 00:13:35.569 ' 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:35.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.569 --rc genhtml_branch_coverage=1 00:13:35.569 --rc genhtml_function_coverage=1 00:13:35.569 --rc genhtml_legend=1 00:13:35.569 --rc geninfo_all_blocks=1 00:13:35.569 --rc geninfo_unexecuted_blocks=1 00:13:35.569 00:13:35.569 ' 00:13:35.569 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:35.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.569 --rc genhtml_branch_coverage=1 00:13:35.569 --rc genhtml_function_coverage=1 00:13:35.569 --rc genhtml_legend=1 00:13:35.569 --rc geninfo_all_blocks=1 00:13:35.569 --rc geninfo_unexecuted_blocks=1 00:13:35.569 00:13:35.569 ' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:35.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.570 --rc genhtml_branch_coverage=1 00:13:35.570 --rc genhtml_function_coverage=1 00:13:35.570 --rc genhtml_legend=1 00:13:35.570 --rc geninfo_all_blocks=1 00:13:35.570 --rc geninfo_unexecuted_blocks=1 00:13:35.570 00:13:35.570 ' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cf8d57d3-3208-4b46-b599-05080570d7ce 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c0b910a5-a643-4600-814a-20e851551a34 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1bf56b37-d476-4c20-aa31-7e1ba9cf9431 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.570 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.710 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.710 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.710 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.711 18:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:43.711 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:43.711 18:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:43.711 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:43.711 Found net devices under 0000:31:00.0: cvl_0_0 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
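(The device-discovery records above come from nvmf/common.sh expanding a per-PCI sysfs glob, pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). A minimal standalone sketch of that lookup follows; the helper name list_pci_net_devs is ours for illustration, while the sysfs layout and the 0000:31:00.0 address are taken from the trace:

#!/usr/bin/env bash
# Mirror of the pci_net_devs expansion traced above: list the kernel net
# devices backed by one PCI function via sysfs.
list_pci_net_devs() {
  local pci=$1 dev
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $dev ]] || continue   # glob stays literal if the NIC exposes no netdev
    echo "${dev##*/}"           # basename only, e.g. cvl_0_0
  done
}
list_pci_net_devs 0000:31:00.0  # prints cvl_0_0 on this test bed

On this machine each E810 port maps to exactly one net device, which is why the log's "(( 1 == 0 ))" count check passes and a single "Found net devices under ..." line is printed per port.)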
00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:43.711 Found net devices under 0000:31:00.1: cvl_0_1 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.711 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.711 18:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:13:43.711 00:13:43.711 --- 10.0.0.2 ping statistics --- 00:13:43.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.711 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:43.711 00:13:43.711 --- 10.0.0.1 ping statistics --- 00:13:43.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.711 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:43.711 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1165490 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1165490 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1165490 ']' 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.712 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.712 [2024-10-08 18:29:37.260863] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:13:43.712 [2024-10-08 18:29:37.260931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.712 [2024-10-08 18:29:37.348759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.712 [2024-10-08 18:29:37.442243] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.712 [2024-10-08 18:29:37.442304] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.712 [2024-10-08 18:29:37.442313] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.712 [2024-10-08 18:29:37.442320] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.712 [2024-10-08 18:29:37.442326] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
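(Before nvmf_tgt comes up, nvmf_tcp_init wires the two E810 ports together through a network namespace. Condensed from the ip/iptables records traced above — interface names and addresses exactly as in this run, SPDK paths shortened — the bring-up is roughly:

#!/usr/bin/env bash
# Target port cvl_0_0 moves into its own netns so the initiator (cvl_0_1)
# and target can exchange NVMe/TCP traffic across real NICs on one host.
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open NVMe/TCP port
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
# The target app is then launched inside the namespace, as logged above:
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The sub-millisecond ping RTTs in the log confirm the two ports are cabled back-to-back before the masking tests start.)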
00:13:43.712 [2024-10-08 18:29:37.443156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.283 [2024-10-08 18:29:38.285215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:44.283 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:44.543 Malloc1 00:13:44.543 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:44.803 Malloc2 00:13:44.803 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.064 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:45.064 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.325 [2024-10-08 18:29:39.257375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.325 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:45.325 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1bf56b37-d476-4c20-aa31-7e1ba9cf9431 -a 10.0.0.2 -s 4420 -i 4 00:13:45.586 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.586 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.586 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.586 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:45.586 
18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.500 [ 0]:0x1 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.500 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=26227b5c0cbb468bb13d65b58028e59f 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 26227b5c0cbb468bb13d65b58028e59f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.762 [ 0]:0x1 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=26227b5c0cbb468bb13d65b58028e59f 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 26227b5c0cbb468bb13d65b58028e59f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.762 18:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.762 [ 1]:0x2 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.762 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:48.022 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:48.022 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:48.022 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:48.022 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.283 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.283 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1bf56b37-d476-4c20-aa31-7e1ba9cf9431 -a 10.0.0.2 -s 4420 -i 4 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:48.545 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.088 [ 0]:0x2 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.088 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:51.088 [ 0]:0x1 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=26227b5c0cbb468bb13d65b58028e59f 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 26227b5c0cbb468bb13d65b58028e59f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:51.088 [ 1]:0x2 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.088 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.348 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.349 18:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:51.349 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.349 [ 0]:0x2 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.610 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1bf56b37-d476-4c20-aa31-7e1ba9cf9431 -a 10.0.0.2 -s 4420 -i 4 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:51.870 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:54.412 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:54.412 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:54.412 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.412 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:54.412 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.413 [ 0]:0x1 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.413 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=26227b5c0cbb468bb13d65b58028e59f 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 26227b5c0cbb468bb13d65b58028e59f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:54.413 [ 1]:0x2 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:54.413 [ 0]:0x2 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.413 18:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:54.413 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:54.675 [2024-10-08 18:29:48.546790] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:54.675 request: 00:13:54.675 { 00:13:54.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.675 "nsid": 2, 00:13:54.675 "host": "nqn.2016-06.io.spdk:host1", 00:13:54.675 "method": "nvmf_ns_remove_host", 00:13:54.675 "req_id": 1 00:13:54.675 } 00:13:54.675 Got JSON-RPC error response 00:13:54.675 response: 00:13:54.675 { 00:13:54.675 "code": -32602, 00:13:54.675 "message": "Invalid parameters" 00:13:54.675 } 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:54.675 18:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:54.675 [ 0]:0x2 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=76473e9cf1de4eb2ab546ad8a9b65402 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 76473e9cf1de4eb2ab546ad8a9b65402 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:54.675 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1167699 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1167699 /var/tmp/host.sock 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1167699 ']' 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:54.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.936 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.936 [2024-10-08 18:29:48.807346] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:13:54.936 [2024-10-08 18:29:48.807399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167699 ] 00:13:54.936 [2024-10-08 18:29:48.887529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.936 [2024-10-08 18:29:48.952212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.878 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.879 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:55.879 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.879 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.139 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cf8d57d3-3208-4b46-b599-05080570d7ce 00:13:56.139 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:56.139 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CF8D57D332084B46B59905080570D7CE -i 00:13:56.139 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c0b910a5-a643-4600-814a-20e851551a34 00:13:56.139 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:56.139 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C0B910A5A6434600814A20E851551A34 -i 00:13:56.399 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:56.660 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:56.661 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:56.661 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:56.921 nvme0n1 00:13:56.921 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:56.921 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:57.182 nvme1n2 00:13:57.182 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:57.182 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:57.182 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:57.182 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:57.182 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:57.443 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:57.443 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:57.443 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:57.443 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:57.704 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cf8d57d3-3208-4b46-b599-05080570d7ce == \c\f\8\d\5\7\d\3\-\3\2\0\8\-\4\b\4\6\-\b\5\9\9\-\0\5\0\8\0\5\7\0\d\7\c\e ]] 00:13:57.704 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:57.704 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:57.704 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
c0b910a5-a643-4600-814a-20e851551a34 == \c\0\b\9\1\0\a\5\-\a\6\4\3\-\4\6\0\0\-\8\1\4\a\-\2\0\e\8\5\1\5\5\1\a\3\4 ]] 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1167699 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1167699 ']' 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1167699 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1167699 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1167699' 00:13:57.964 killing process with pid 1167699 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1167699 00:13:57.964 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1167699 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.225 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.225 rmmod nvme_tcp 00:13:58.225 rmmod nvme_fabrics 00:13:58.486 rmmod nvme_keyring 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1165490 ']' 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1165490 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1165490 ']' 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1165490 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1165490 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1165490' 00:13:58.486 killing process with pid 1165490 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1165490 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1165490 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.486 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.033 00:14:01.033 real 0m25.389s 00:14:01.033 user 0m25.381s 00:14:01.033 sys 0m8.157s 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:01.033 ************************************ 00:14:01.033 END TEST nvmf_ns_masking 00:14:01.033 ************************************ 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
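nvmftestfini's firewall cleanup, traced at nvmf/common.sh@297 and @789 just above, restores the ruleset minus everything the suite tagged. Condensed from the trace; the matching insertion helper (ipts, nvmf/common.sh@788, traced further down) embeds the full rule text in an SPDK_NVMF comment, which is what makes the grep below reliable:

  iptr() {
      # Re-load iptables minus every rule carrying the SPDK_NVMF comment tag.
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }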
00:14:01.033 ************************************ 00:14:01.033 START TEST nvmf_nvme_cli 00:14:01.033 ************************************ 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:01.033 * Looking for test storage... 00:14:01.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.033 --rc genhtml_branch_coverage=1 00:14:01.033 --rc genhtml_function_coverage=1 00:14:01.033 --rc genhtml_legend=1 00:14:01.033 --rc geninfo_all_blocks=1 00:14:01.033 --rc geninfo_unexecuted_blocks=1 00:14:01.033 00:14:01.033 ' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.033 --rc genhtml_branch_coverage=1 00:14:01.033 --rc genhtml_function_coverage=1 00:14:01.033 --rc genhtml_legend=1 00:14:01.033 --rc geninfo_all_blocks=1 00:14:01.033 --rc geninfo_unexecuted_blocks=1 00:14:01.033 00:14:01.033 ' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.033 --rc genhtml_branch_coverage=1 00:14:01.033 --rc genhtml_function_coverage=1 00:14:01.033 --rc genhtml_legend=1 00:14:01.033 --rc geninfo_all_blocks=1 00:14:01.033 --rc geninfo_unexecuted_blocks=1 00:14:01.033 00:14:01.033 ' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.033 --rc genhtml_branch_coverage=1 00:14:01.033 --rc genhtml_function_coverage=1 00:14:01.033 --rc genhtml_legend=1 00:14:01.033 --rc geninfo_all_blocks=1 00:14:01.033 --rc geninfo_unexecuted_blocks=1 00:14:01.033 00:14:01.033 ' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
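The test-storage probe above funnels through scripts/common.sh's version comparison (lt, cmp_versions, decimal at @333 through @368 in the trace) to decide whether the installed lcov predates 2.x. A condensed sketch of the same field-by-field compare, omitting the decimal validation step:

  # lt returns success when dotted version $1 sorts before $2.
  lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x"   # matches the cmp_versions 1.15 '<' 2 call above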
00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.033 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.034 18:29:54 
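The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': a numeric test against a config flag that is empty in this run (the flag's name is not visible in the trace). The failing shape, and a defensive form that defaults the variable first; SOME_CONFIG_FLAG is a placeholder, not the real name:

  [ "$unset_flag" -eq 1 ]                  # empty string => "integer expression expected"
  if [ "${SOME_CONFIG_FLAG:-0}" -eq 1 ]; then
      echo "feature enabled"               # placeholder branch body
  fi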
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.034 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.181 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:09.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:09.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.182 
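gather_supported_nvmf_pci_devs, traced above, buckets PCI device IDs into e810/x722/mlx tables and keeps the e810 bucket because autorun-spdk.conf set SPDK_TEST_NVMF_NICS=e810. A minimal self-contained stand-in for the same scan (the real code reads a pre-built pci_bus_cache; walking sysfs directly is an assumption for illustration):

  # Report Intel E810 functions; 0x1592/0x159b are the device IDs matched above.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")
      device=$(<"$dev/device")
      if [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]]; then
          echo "Found ${dev##*/} ($vendor - $device)"
      fi
  done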
18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:09.182 Found net devices under 0000:31:00.0: cvl_0_0 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:09.182 Found net devices under 0000:31:00.1: cvl_0_1 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:14:09.182 00:14:09.182 --- 10.0.0.2 ping statistics --- 00:14:09.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.182 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:09.182 00:14:09.182 --- 10.0.0.1 ping statistics --- 00:14:09.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.182 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1172885 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1172885 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1172885 ']' 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.182 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.182 [2024-10-08 18:30:02.657683] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:14:09.182 [2024-10-08 18:30:02.657749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.182 [2024-10-08 18:30:02.747259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.182 [2024-10-08 18:30:02.843371] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.183 [2024-10-08 18:30:02.843432] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.183 [2024-10-08 18:30:02.843440] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.183 [2024-10-08 18:30:02.843447] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.183 [2024-10-08 18:30:02.843454] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.183 [2024-10-08 18:30:02.845584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.183 [2024-10-08 18:30:02.845745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.183 [2024-10-08 18:30:02.845904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.183 [2024-10-08 18:30:02.845904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.444 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.444 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:09.444 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:09.444 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.444 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.705 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.705 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 [2024-10-08 18:30:03.535968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 Malloc0 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
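Gathered from the nvmf_tcp_init and nvmfappstart traces above: the target-side port is moved into its own network namespace so initiator and target traffic use separate stacks across the physical E810 link, then nvmf_tgt starts inside that namespace and the TCP transport is created over RPC. Binary paths are shortened from the workspace paths in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192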
00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 Malloc1 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 [2024-10-08 18:30:03.637274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:14:09.967 00:14:09.967 Discovery Log Number of Records 2, Generation counter 2 00:14:09.967 =====Discovery Log Entry 0====== 00:14:09.967 trtype: tcp 00:14:09.967 adrfam: ipv4 00:14:09.967 subtype: current discovery subsystem 00:14:09.967 treq: not required 00:14:09.967 portid: 0 00:14:09.967 trsvcid: 4420 00:14:09.967 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:09.967 traddr: 10.0.0.2 00:14:09.967 eflags: explicit discovery connections, duplicate discovery information 00:14:09.967 sectype: none 00:14:09.967 =====Discovery Log Entry 1====== 00:14:09.967 trtype: tcp 00:14:09.967 adrfam: ipv4 00:14:09.967 subtype: nvme subsystem 00:14:09.967 treq: not required 00:14:09.967 portid: 0 00:14:09.967 trsvcid: 4420 00:14:09.967 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:09.967 traddr: 10.0.0.2 00:14:09.967 eflags: none 00:14:09.967 sectype: none 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:09.967 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.881 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:11.881 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:11.881 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.881 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:11.881 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:11.881 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:13.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:13.983 18:30:07 
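The provisioning that produced the discovery log above, collected from the rpc_cmd traces into one sequence. rpc_cmd is the suite's wrapper around scripts/rpc.py against the target socket, and NVME_HOST carries the hostnqn/hostid generated by nvme gen-hostnqn earlier:

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420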
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:13.983 /dev/nvme0n2 ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.983 18:30:07 
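waitforserial, condensed from the autotest_common.sh trace above (@1198 through @1208): after nvme connect, poll lsblk until the expected number of block devices carrying the subsystem's serial appears; waitforserial_disconnect (traced just below) polls the same output until the serial is gone:

  waitforserial() {
      local serial=$1 expected=${2:-1} i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME 2   # two namespaces exported above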
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.983 rmmod nvme_tcp 00:14:13.983 rmmod nvme_fabrics 00:14:13.983 rmmod nvme_keyring 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1172885 ']' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1172885 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1172885 ']' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1172885 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1172885 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1172885' 00:14:13.983 killing process with pid 1172885 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1172885 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1172885 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.983 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:16.541 00:14:16.541 real 0m15.333s 00:14:16.541 user 0m22.530s 00:14:16.541 sys 0m6.561s 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:16.541 ************************************ 00:14:16.541 END TEST nvmf_nvme_cli 00:14:16.541 ************************************ 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.541 ************************************ 00:14:16.541 START TEST nvmf_vfio_user 00:14:16.541 ************************************ 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:16.541 * Looking for test storage... 00:14:16.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.541 --rc genhtml_branch_coverage=1 00:14:16.541 --rc genhtml_function_coverage=1 00:14:16.541 --rc genhtml_legend=1 00:14:16.541 --rc geninfo_all_blocks=1 00:14:16.541 --rc geninfo_unexecuted_blocks=1 00:14:16.541 00:14:16.541 ' 00:14:16.541 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.541 --rc genhtml_branch_coverage=1 00:14:16.541 --rc genhtml_function_coverage=1 00:14:16.541 --rc genhtml_legend=1 00:14:16.541 --rc geninfo_all_blocks=1 00:14:16.541 --rc geninfo_unexecuted_blocks=1 00:14:16.541 00:14:16.541 ' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:16.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.542 --rc genhtml_branch_coverage=1 00:14:16.542 --rc genhtml_function_coverage=1 00:14:16.542 --rc genhtml_legend=1 00:14:16.542 --rc geninfo_all_blocks=1 00:14:16.542 --rc geninfo_unexecuted_blocks=1 00:14:16.542 00:14:16.542 ' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:16.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.542 --rc genhtml_branch_coverage=1 00:14:16.542 --rc genhtml_function_coverage=1 00:14:16.542 --rc genhtml_legend=1 00:14:16.542 --rc geninfo_all_blocks=1 00:14:16.542 --rc geninfo_unexecuted_blocks=1 00:14:16.542 00:14:16.542 ' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:16.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
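Aside: the "[: : integer expression expected" complaint from common.sh line 33 above comes from handing test(1) an empty string where -eq needs an integer on both sides ('[' '' -eq 1 ']'). A minimal sketch of the failure and two null-safe guards; VAR is a hypothetical stand-in, not the script's actual variable:

    #!/usr/bin/env bash
    VAR=""                                   # empty, e.g. an unset config knob
    # Reproduces the error: '[' '' -eq 1 ']' -> "integer expression expected"
    [ "$VAR" -eq 1 ] 2>/dev/null && echo "enabled"
    # Null-safe alternatives:
    [ -n "$VAR" ] && [ "$VAR" -eq 1 ] && echo "enabled"   # compare only when non-empty
    [ "${VAR:-0}" -eq 1 ] && echo "enabled"               # default empty to 0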
00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1174995 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1174995' 00:14:16.542 Process pid: 1174995 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1174995 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1174995 ']' 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.542 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:16.542 [2024-10-08 18:30:10.414549] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:14:16.542 [2024-10-08 18:30:10.414619] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.542 [2024-10-08 18:30:10.498953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.542 [2024-10-08 18:30:10.560053] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.542 [2024-10-08 18:30:10.560087] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
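The waitforlisten call traced above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) blocks until the freshly started nvmf_tgt answers RPCs on its UNIX socket. A minimal sketch of that polling pattern, assuming the stock scripts/rpc.py and the real rpc_get_methods RPC; the poll interval is an assumption, and the harness's own helper does additional bookkeeping:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    pid=1174995                               # pid of the nvmf_tgt just launched
    for ((i = 0; i < 100; i++)); do
        # Give up early if the target process died during startup.
        kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        # The target is ready once it answers any RPC on the socket.
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5                             # assumed interval, not the harness's exact value
    done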
00:14:16.542 [2024-10-08 18:30:10.560093] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.542 [2024-10-08 18:30:10.560098] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.542 [2024-10-08 18:30:10.560102] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.542 [2024-10-08 18:30:10.561450] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.542 [2024-10-08 18:30:10.561602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.542 [2024-10-08 18:30:10.561753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.542 [2024-10-08 18:30:10.561754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.484 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.484 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:17.484 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:18.426 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:18.426 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:18.426 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:18.426 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:18.426 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:18.426 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:18.686 Malloc1 00:14:18.686 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:18.946 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:18.946 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:19.208 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:19.208 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:19.208 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:19.468 Malloc2 00:14:19.468 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
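The setup_nvmf_vfio_user sequence being traced here (it continues for cnode2 just below) boils down to one VFIOUSER transport plus, per device, a malloc bdev, a subsystem, a namespace, and a vfio-user listener. Condensed into a sketch using the exact RPCs, paths, and sizes from the run above:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512-byte blocks
        "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "$dir" -s 0
    done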
00:14:19.468 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:19.728 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:19.991 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:19.991 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:19.992 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:19.992 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:19.992 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:19.992 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:19.992 [2024-10-08 18:30:13.910787] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:14:19.992 [2024-10-08 18:30:13.910856] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175717 ] 00:14:19.992 [2024-10-08 18:30:13.937407] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:19.992 [2024-10-08 18:30:13.950243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:19.992 [2024-10-08 18:30:13.950261] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc9cf67a000 00:14:19.992 [2024-10-08 18:30:13.951245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.952244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.953252] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.954259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.955257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.956270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.957277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.958281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:19.992 [2024-10-08 18:30:13.959287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:19.992 [2024-10-08 18:30:13.959293] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc9cf66f000 00:14:19.992 [2024-10-08 18:30:13.960206] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:19.992 [2024-10-08 18:30:13.969663] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:19.992 [2024-10-08 18:30:13.969683] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:19.992 [2024-10-08 18:30:13.974380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:19.992 [2024-10-08 18:30:13.974410] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:19.992 [2024-10-08 18:30:13.974475] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:19.992 [2024-10-08 18:30:13.974489] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:19.992 [2024-10-08 18:30:13.974493] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:19.992 [2024-10-08 18:30:13.975379] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:19.992 [2024-10-08 18:30:13.975387] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:19.992 [2024-10-08 18:30:13.975392] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:19.992 [2024-10-08 18:30:13.976383] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:19.992 [2024-10-08 18:30:13.976393] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:19.992 [2024-10-08 18:30:13.976398] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:19.992 [2024-10-08 18:30:13.977388] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:19.992 [2024-10-08 18:30:13.977394] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:19.992 [2024-10-08 18:30:13.978391] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:19.992 [2024-10-08 
18:30:13.978397] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:19.992 [2024-10-08 18:30:13.978401] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:19.992 [2024-10-08 18:30:13.978405] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:19.992 [2024-10-08 18:30:13.978509] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:19.992 [2024-10-08 18:30:13.978513] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:19.992 [2024-10-08 18:30:13.978516] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:19.992 [2024-10-08 18:30:13.979393] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:19.992 [2024-10-08 18:30:13.980398] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:19.992 [2024-10-08 18:30:13.981401] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:19.992 [2024-10-08 18:30:13.982406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:19.992 [2024-10-08 18:30:13.982466] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:19.992 [2024-10-08 18:30:13.983410] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:19.992 [2024-10-08 18:30:13.983416] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:19.992 [2024-10-08 18:30:13.983419] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983433] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:19.992 [2024-10-08 18:30:13.983439] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983448] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:19.992 [2024-10-08 18:30:13.983452] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:19.992 [2024-10-08 18:30:13.983455] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.992 [2024-10-08 18:30:13.983464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:19.992 [2024-10-08 18:30:13.983499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:19.992 [2024-10-08 18:30:13.983505] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:19.992 [2024-10-08 18:30:13.983509] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:19.992 [2024-10-08 18:30:13.983512] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:19.992 [2024-10-08 18:30:13.983515] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:19.992 [2024-10-08 18:30:13.983519] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:19.992 [2024-10-08 18:30:13.983522] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:19.992 [2024-10-08 18:30:13.983525] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983532] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:19.992 [2024-10-08 18:30:13.983549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:19.992 [2024-10-08 18:30:13.983557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.992 [2024-10-08 18:30:13.983563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.992 [2024-10-08 18:30:13.983569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.992 [2024-10-08 18:30:13.983575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.992 [2024-10-08 18:30:13.983578] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983584] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:19.992 [2024-10-08 18:30:13.983599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:19.992 [2024-10-08 18:30:13.983603] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:19.992 [2024-10-08 18:30:13.983606] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983611] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:19.992 [2024-10-08 18:30:13.983616] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983675] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983681] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983686] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:19.993 [2024-10-08 18:30:13.983689] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:19.993 [2024-10-08 18:30:13.983691] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.993 [2024-10-08 18:30:13.983695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983715] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:19.993 [2024-10-08 18:30:13.983725] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983731] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983736] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:19.993 [2024-10-08 18:30:13.983739] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:19.993 [2024-10-08 18:30:13.983741] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.993 [2024-10-08 18:30:13.983745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983771] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983776] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983781] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:19.993 [2024-10-08 18:30:13.983784] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:19.993 [2024-10-08 18:30:13.983786] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.993 [2024-10-08 18:30:13.983791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983806] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983810] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983816] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983820] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983825] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983829] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983832] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:19.993 [2024-10-08 18:30:13.983835] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:19.993 [2024-10-08 18:30:13.983839] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:19.993 [2024-10-08 18:30:13.983853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983921] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:19.993 [2024-10-08 18:30:13.983924] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:19.993 [2024-10-08 18:30:13.983927] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:19.993 [2024-10-08 18:30:13.983929] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:19.993 [2024-10-08 18:30:13.983931] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:19.993 [2024-10-08 18:30:13.983936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:19.993 [2024-10-08 18:30:13.983941] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:19.993 [2024-10-08 18:30:13.983944] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:19.993 [2024-10-08 18:30:13.983947] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.993 [2024-10-08 18:30:13.983951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983956] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:19.993 [2024-10-08 18:30:13.983959] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:19.993 [2024-10-08 18:30:13.983961] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.993 [2024-10-08 18:30:13.983966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983972] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:19.993 [2024-10-08 18:30:13.983979] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:19.993 [2024-10-08 18:30:13.983981] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:19.993 [2024-10-08 18:30:13.983986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:19.993 [2024-10-08 18:30:13.983991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.983999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.984007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:19.993 [2024-10-08 18:30:13.984012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:19.993 ===================================================== 00:14:19.993 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:19.993 ===================================================== 00:14:19.993 Controller Capabilities/Features 00:14:19.993 ================================ 00:14:19.993 Vendor ID: 4e58 00:14:19.993 Subsystem Vendor ID: 4e58 00:14:19.993 Serial Number: SPDK1 00:14:19.993 Model Number: SPDK bdev Controller 00:14:19.993 Firmware Version: 25.01 00:14:19.993 Recommended Arb Burst: 6 00:14:19.993 IEEE OUI Identifier: 8d 6b 50 00:14:19.993 Multi-path I/O 00:14:19.993 May have multiple subsystem ports: Yes 00:14:19.993 May have multiple controllers: Yes 00:14:19.993 Associated with SR-IOV VF: No 00:14:19.993 Max Data Transfer Size: 131072 00:14:19.993 Max Number of Namespaces: 32 00:14:19.993 Max Number of I/O Queues: 127 00:14:19.993 NVMe Specification Version (VS): 1.3 00:14:19.993 NVMe Specification Version (Identify): 1.3 00:14:19.993 Maximum Queue Entries: 256 00:14:19.993 Contiguous Queues Required: Yes 00:14:19.993 Arbitration Mechanisms Supported 00:14:19.993 Weighted Round Robin: Not Supported 00:14:19.993 Vendor Specific: Not Supported 00:14:19.993 Reset Timeout: 15000 ms 00:14:19.993 Doorbell Stride: 4 bytes 00:14:19.993 NVM Subsystem Reset: Not Supported 00:14:19.993 Command Sets Supported 00:14:19.993 NVM Command Set: Supported 00:14:19.993 Boot Partition: Not Supported 00:14:19.993 Memory Page Size Minimum: 4096 bytes 00:14:19.993 Memory Page Size Maximum: 4096 bytes 00:14:19.993 Persistent Memory Region: Not Supported 00:14:19.993 Optional Asynchronous Events Supported 00:14:19.993 Namespace Attribute Notices: Supported 00:14:19.993 Firmware Activation Notices: Not Supported 00:14:19.993 ANA Change Notices: Not Supported 00:14:19.993 PLE Aggregate Log Change Notices: Not Supported 00:14:19.993 LBA Status Info Alert Notices: Not Supported 00:14:19.993 EGE Aggregate Log Change Notices: Not Supported 00:14:19.993 Normal NVM Subsystem Shutdown event: Not Supported 00:14:19.993 Zone Descriptor Change Notices: Not Supported 00:14:19.993 Discovery Log Change Notices: Not Supported 00:14:19.993 Controller Attributes 00:14:19.993 128-bit Host Identifier: Supported 00:14:19.993 Non-Operational Permissive Mode: Not Supported 00:14:19.993 NVM Sets: Not Supported 00:14:19.993 Read Recovery Levels: Not Supported 00:14:19.993 Endurance Groups: Not Supported 00:14:19.993 Predictable Latency Mode: Not Supported 00:14:19.993 Traffic Based Keep ALive: Not Supported 00:14:19.993 Namespace Granularity: Not Supported 00:14:19.993 SQ Associations: Not Supported 00:14:19.993 UUID List: Not Supported 00:14:19.993 Multi-Domain Subsystem: Not Supported 00:14:19.994 Fixed Capacity Management: Not Supported 00:14:19.994 Variable Capacity Management: Not Supported 00:14:19.994 Delete Endurance Group: Not Supported 00:14:19.994 Delete NVM Set: Not Supported 00:14:19.994 Extended LBA Formats Supported: Not Supported 00:14:19.994 Flexible Data Placement Supported: Not Supported 00:14:19.994 00:14:19.994 Controller Memory Buffer Support 00:14:19.994 ================================ 00:14:19.994 Supported: No 00:14:19.994 00:14:19.994 Persistent Memory Region Support 00:14:19.994 
================================ 00:14:19.994 Supported: No 00:14:19.994 00:14:19.994 Admin Command Set Attributes 00:14:19.994 ============================ 00:14:19.994 Security Send/Receive: Not Supported 00:14:19.994 Format NVM: Not Supported 00:14:19.994 Firmware Activate/Download: Not Supported 00:14:19.994 Namespace Management: Not Supported 00:14:19.994 Device Self-Test: Not Supported 00:14:19.994 Directives: Not Supported 00:14:19.994 NVMe-MI: Not Supported 00:14:19.994 Virtualization Management: Not Supported 00:14:19.994 Doorbell Buffer Config: Not Supported 00:14:19.994 Get LBA Status Capability: Not Supported 00:14:19.994 Command & Feature Lockdown Capability: Not Supported 00:14:19.994 Abort Command Limit: 4 00:14:19.994 Async Event Request Limit: 4 00:14:19.994 Number of Firmware Slots: N/A 00:14:19.994 Firmware Slot 1 Read-Only: N/A 00:14:19.994 Firmware Activation Without Reset: N/A 00:14:19.994 Multiple Update Detection Support: N/A 00:14:19.994 Firmware Update Granularity: No Information Provided 00:14:19.994 Per-Namespace SMART Log: No 00:14:19.994 Asymmetric Namespace Access Log Page: Not Supported 00:14:19.994 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:19.994 Command Effects Log Page: Supported 00:14:19.994 Get Log Page Extended Data: Supported 00:14:19.994 Telemetry Log Pages: Not Supported 00:14:19.994 Persistent Event Log Pages: Not Supported 00:14:19.994 Supported Log Pages Log Page: May Support 00:14:19.994 Commands Supported & Effects Log Page: Not Supported 00:14:19.994 Feature Identifiers & Effects Log Page:May Support 00:14:19.994 NVMe-MI Commands & Effects Log Page: May Support 00:14:19.994 Data Area 4 for Telemetry Log: Not Supported 00:14:19.994 Error Log Page Entries Supported: 128 00:14:19.994 Keep Alive: Supported 00:14:19.994 Keep Alive Granularity: 10000 ms 00:14:19.994 00:14:19.994 NVM Command Set Attributes 00:14:19.994 ========================== 00:14:19.994 Submission Queue Entry Size 00:14:19.994 Max: 64 00:14:19.994 Min: 64 00:14:19.994 Completion Queue Entry Size 00:14:19.994 Max: 16 00:14:19.994 Min: 16 00:14:19.994 Number of Namespaces: 32 00:14:19.994 Compare Command: Supported 00:14:19.994 Write Uncorrectable Command: Not Supported 00:14:19.994 Dataset Management Command: Supported 00:14:19.994 Write Zeroes Command: Supported 00:14:19.994 Set Features Save Field: Not Supported 00:14:19.994 Reservations: Not Supported 00:14:19.994 Timestamp: Not Supported 00:14:19.994 Copy: Supported 00:14:19.994 Volatile Write Cache: Present 00:14:19.994 Atomic Write Unit (Normal): 1 00:14:19.994 Atomic Write Unit (PFail): 1 00:14:19.994 Atomic Compare & Write Unit: 1 00:14:19.994 Fused Compare & Write: Supported 00:14:19.994 Scatter-Gather List 00:14:19.994 SGL Command Set: Supported (Dword aligned) 00:14:19.994 SGL Keyed: Not Supported 00:14:19.994 SGL Bit Bucket Descriptor: Not Supported 00:14:19.994 SGL Metadata Pointer: Not Supported 00:14:19.994 Oversized SGL: Not Supported 00:14:19.994 SGL Metadata Address: Not Supported 00:14:19.994 SGL Offset: Not Supported 00:14:19.994 Transport SGL Data Block: Not Supported 00:14:19.994 Replay Protected Memory Block: Not Supported 00:14:19.994 00:14:19.994 Firmware Slot Information 00:14:19.994 ========================= 00:14:19.994 Active slot: 1 00:14:19.994 Slot 1 Firmware Revision: 25.01 00:14:19.994 00:14:19.994 00:14:19.994 Commands Supported and Effects 00:14:19.994 ============================== 00:14:19.994 Admin Commands 00:14:19.994 -------------- 00:14:19.994 Get Log Page (02h): Supported 
00:14:19.994 Identify (06h): Supported 00:14:19.994 Abort (08h): Supported 00:14:19.994 Set Features (09h): Supported 00:14:19.994 Get Features (0Ah): Supported 00:14:19.994 Asynchronous Event Request (0Ch): Supported 00:14:19.994 Keep Alive (18h): Supported 00:14:19.994 I/O Commands 00:14:19.994 ------------ 00:14:19.994 Flush (00h): Supported LBA-Change 00:14:19.994 Write (01h): Supported LBA-Change 00:14:19.994 Read (02h): Supported 00:14:19.994 Compare (05h): Supported 00:14:19.994 Write Zeroes (08h): Supported LBA-Change 00:14:19.994 Dataset Management (09h): Supported LBA-Change 00:14:19.994 Copy (19h): Supported LBA-Change 00:14:19.994 00:14:19.994 Error Log 00:14:19.994 ========= 00:14:19.994 00:14:19.994 Arbitration 00:14:19.994 =========== 00:14:19.994 Arbitration Burst: 1 00:14:19.994 00:14:19.994 Power Management 00:14:19.994 ================ 00:14:19.994 Number of Power States: 1 00:14:19.994 Current Power State: Power State #0 00:14:19.994 Power State #0: 00:14:19.994 Max Power: 0.00 W 00:14:19.994 Non-Operational State: Operational 00:14:19.994 Entry Latency: Not Reported 00:14:19.994 Exit Latency: Not Reported 00:14:19.994 Relative Read Throughput: 0 00:14:19.994 Relative Read Latency: 0 00:14:19.994 Relative Write Throughput: 0 00:14:19.994 Relative Write Latency: 0 00:14:19.994 Idle Power: Not Reported 00:14:19.994 Active Power: Not Reported 00:14:19.994 Non-Operational Permissive Mode: Not Supported 00:14:19.994 00:14:19.994 Health Information 00:14:19.994 ================== 00:14:19.994 Critical Warnings: 00:14:19.994 Available Spare Space: OK 00:14:19.994 Temperature: OK 00:14:19.994 Device Reliability: OK 00:14:19.994 Read Only: No 00:14:19.994 Volatile Memory Backup: OK 00:14:19.994 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:19.994 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:19.994 Available Spare: 0% 00:14:19.994 Available Spare Threshold: 0% 00:14:19.994 Life Percentage Used: 0% 00:14:19.994 Data Units Read: 0 00:14:19.994 Data Units Written: 0 00:14:19.994 Host Read Commands: 0 00:14:19.994 Host Write Commands: 0 00:14:19.994 Controller Busy Time: 0 minutes 00:14:19.995 Power Cycles: 0 00:14:19.995 Power On Hours: 0 hours 00:14:19.995 Unsafe Shutdowns: 0 00:14:19.995 Unrecoverable Media Errors: 0 00:14:19.995 Lifetime Error Log Entries: 0 00:14:19.995 Warning Temperature Time: 0 minutes 00:14:19.995 Critical Temperature Time: 0 minutes 00:14:19.995 00:14:19.995 Number of Queues 00:14:19.995 ================ 00:14:19.995 Number of I/O Submission Queues: 127 00:14:19.995 Number of I/O Completion Queues: 127 00:14:19.995 00:14:19.995 Active Namespaces 00:14:19.995 ================= 00:14:19.995 Namespace ID:1 00:14:19.995 Error Recovery Timeout: Unlimited 00:14:19.995 Command Set Identifier: NVM (00h) 00:14:19.995 Deallocate: Supported 00:14:19.995 Deallocated/Unwritten Error: Not Supported 00:14:19.995 Deallocated Read Value: Unknown 00:14:19.995 Deallocate in Write Zeroes: Not Supported 00:14:19.995 Deallocated Guard Field: 0xFFFF 00:14:19.995 Flush: Supported 00:14:19.995 Reservation: Supported 00:14:19.995 Namespace Sharing Capabilities: Multiple Controllers 00:14:19.995 Size (in LBAs): 131072 (0GiB) 00:14:19.995 Capacity (in LBAs): 131072 (0GiB) 00:14:19.995 Utilization (in LBAs): 131072 (0GiB) 00:14:19.995 NGUID: BE5E0A4EABC24377A724E22BC8C288F1 00:14:19.995 UUID: be5e0a4e-abc2-4377-a724-e22bc8c288f1 00:14:19.995 Thin Provisioning: Not Supported 00:14:19.995 Per-NS Atomic Units: Yes 00:14:19.995 Atomic Boundary Size (Normal): 0 00:14:19.995 Atomic Boundary Size (PFail): 0 00:14:19.995 Atomic Boundary Offset: 0 00:14:19.995 Maximum Single Source Range Length: 65535 00:14:19.995 Maximum Copy Length: 65535 00:14:19.995 Maximum Source Range Count: 1 00:14:19.995 NGUID/EUI64 Never Reused: No 00:14:19.995 Namespace Write Protected: No 00:14:19.995 Number of LBA Formats: 1 00:14:19.995 Current LBA Format: LBA Format #00 00:14:19.995 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:19.995 00:14:19.995
[2024-10-08 18:30:13.984083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:19.994 [2024-10-08 18:30:13.984091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:19.994 [2024-10-08 18:30:13.984110] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:19.994 [2024-10-08 18:30:13.984117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.994 [2024-10-08 18:30:13.984122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.994 [2024-10-08 18:30:13.984126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.994 [2024-10-08 18:30:13.984130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.994 [2024-10-08 18:30:13.984420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:19.994 [2024-10-08 18:30:13.984426] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:19.994 [2024-10-08 18:30:13.985420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:19.994 [2024-10-08 18:30:13.985458] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:19.994 [2024-10-08 18:30:13.985463] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:19.994 [2024-10-08 18:30:13.986426] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:19.994 [2024-10-08 18:30:13.986434] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:19.994 [2024-10-08 18:30:13.986487] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:19.994 [2024-10-08 18:30:13.988981] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:19.994 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:20.255 [2024-10-08 18:30:14.158592] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.543 Initializing NVMe Controllers 00:14:25.543 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:25.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:25.543 Initialization complete. Launching workers. 00:14:25.543 ======================================================== 00:14:25.543 Latency(us) 00:14:25.543 Device Information : IOPS MiB/s Average min max 00:14:25.543 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39975.53 156.15 3201.82 849.17 8277.55 00:14:25.543 ======================================================== 00:14:25.543 Total : 39975.53 156.15 3201.82 849.17 8277.55 00:14:25.543 00:14:25.543 [2024-10-08 18:30:19.176066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.543 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:25.543 [2024-10-08 18:30:19.359917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.830 Initializing NVMe Controllers 00:14:30.830 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:30.830 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:30.830 Initialization complete. Launching workers. 00:14:30.830 ======================================================== 00:14:30.830 Latency(us) 00:14:30.830 Device Information : IOPS MiB/s Average min max 00:14:30.830 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.13 62.73 7976.09 6177.10 8967.77 00:14:30.830 ======================================================== 00:14:30.830 Total : 16059.13 62.73 7976.09 6177.10 8967.77 00:14:30.830 00:14:30.831 [2024-10-08 18:30:24.399727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.831 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:30.831 [2024-10-08 18:30:24.589549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.113 [2024-10-08 18:30:29.651137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:36.113 Initializing NVMe Controllers 00:14:36.113 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:36.113 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:36.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:36.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:36.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:36.113 Initialization complete. Launching workers. 
00:14:36.113 Starting thread on core 2 00:14:36.113 Starting thread on core 3 00:14:36.113 Starting thread on core 1 00:14:36.113 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:36.113 [2024-10-08 18:30:29.887311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.316 [2024-10-08 18:30:33.680186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.316 Initializing NVMe Controllers 00:14:40.316 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.316 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:40.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:40.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:40.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:40.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:40.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:40.316 Initialization complete. Launching workers. 00:14:40.316 Starting thread on core 1 with urgent priority queue 00:14:40.316 Starting thread on core 2 with urgent priority queue 00:14:40.316 Starting thread on core 3 with urgent priority queue 00:14:40.316 Starting thread on core 0 with urgent priority queue 00:14:40.316 SPDK bdev Controller (SPDK1 ) core 0: 13064.33 IO/s 7.65 secs/100000 ios 00:14:40.316 SPDK bdev Controller (SPDK1 ) core 1: 8438.33 IO/s 11.85 secs/100000 ios 00:14:40.316 SPDK bdev Controller (SPDK1 ) core 2: 12752.33 IO/s 7.84 secs/100000 ios 00:14:40.316 SPDK bdev Controller (SPDK1 ) core 3: 7574.33 IO/s 13.20 secs/100000 ios 00:14:40.316 ======================================================== 00:14:40.316 00:14:40.316 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:40.316 [2024-10-08 18:30:33.904682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.316 Initializing NVMe Controllers 00:14:40.316 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.316 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.316 Namespace ID: 1 size: 0GB 00:14:40.316 Initialization complete. 00:14:40.316 INFO: using host memory buffer for IO 00:14:40.316 Hello world! 
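For reference, the example runs above (@84 through @88) all drive the same vfio-user controller and differ only in workload flags. A condensed sketch of the invocations, using a hypothetical $SPDK_DIR for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and $TRID for the transport ID string that each command repeats; all flags are taken verbatim from the runs above:
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# @84/@85: 4 KiB queue-depth-128 read and then write benchmarks, 5 seconds, core mask 0x2
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
# @86: 50/50 random read/write at queue depth 32 on cores 1-3 while exercising reconnects
$SPDK_DIR/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
# @87/@88: arbitration across urgent-priority queues, then the minimal hello_world I/O sample
$SPDK_DIR/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
$SPDK_DIR/build/examples/hello_world -d 256 -g -r "$TRID"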
00:14:40.316 [2024-10-08 18:30:33.940908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.316 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:40.316 [2024-10-08 18:30:34.165380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.261 Initializing NVMe Controllers 00:14:41.261 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.261 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.261 Initialization complete. Launching workers. 00:14:41.261 submit (in ns) avg, min, max = 5865.9, 2846.7, 3998820.0 00:14:41.261 complete (in ns) avg, min, max = 17112.9, 1640.0, 4994630.0 00:14:41.261 00:14:41.261 Submit histogram 00:14:41.261 ================ 00:14:41.261 Range in us Cumulative Count 00:14:41.261 2.840 - 2.853: 0.0633% ( 13) 00:14:41.261 2.853 - 2.867: 0.6432% ( 119) 00:14:41.261 2.867 - 2.880: 2.8507% ( 453) 00:14:41.261 2.880 - 2.893: 6.0280% ( 652) 00:14:41.261 2.893 - 2.907: 10.1603% ( 848) 00:14:41.261 2.907 - 2.920: 15.5597% ( 1108) 00:14:41.261 2.920 - 2.933: 21.5097% ( 1221) 00:14:41.261 2.933 - 2.947: 27.5864% ( 1247) 00:14:41.261 2.947 - 2.960: 33.5120% ( 1216) 00:14:41.261 2.960 - 2.973: 40.2271% ( 1378) 00:14:41.261 2.973 - 2.987: 46.8642% ( 1362) 00:14:41.261 2.987 - 3.000: 54.1543% ( 1496) 00:14:41.261 3.000 - 3.013: 62.0876% ( 1628) 00:14:41.261 3.013 - 3.027: 71.4975% ( 1931) 00:14:41.261 3.027 - 3.040: 80.4542% ( 1838) 00:14:41.261 3.040 - 3.053: 87.4665% ( 1439) 00:14:41.261 3.053 - 3.067: 92.1982% ( 971) 00:14:41.261 3.067 - 3.080: 95.1221% ( 600) 00:14:41.261 3.080 - 3.093: 97.1054% ( 407) 00:14:41.261 3.093 - 3.107: 98.4309% ( 272) 00:14:41.261 3.107 - 3.120: 99.1618% ( 150) 00:14:41.261 3.120 - 3.133: 99.5078% ( 71) 00:14:41.261 3.133 - 3.147: 99.5907% ( 17) 00:14:41.261 3.147 - 3.160: 99.6199% ( 6) 00:14:41.261 3.160 - 3.173: 99.6345% ( 3) 00:14:41.261 3.467 - 3.493: 99.6394% ( 1) 00:14:41.261 3.573 - 3.600: 99.6443% ( 1) 00:14:41.261 3.600 - 3.627: 99.6491% ( 1) 00:14:41.261 3.680 - 3.707: 99.6540% ( 1) 00:14:41.261 3.707 - 3.733: 99.6589% ( 1) 00:14:41.261 3.760 - 3.787: 99.6638% ( 1) 00:14:41.261 3.813 - 3.840: 99.6686% ( 1) 00:14:41.261 4.080 - 4.107: 99.6735% ( 1) 00:14:41.261 4.133 - 4.160: 99.6784% ( 1) 00:14:41.261 4.587 - 4.613: 99.6833% ( 1) 00:14:41.261 4.640 - 4.667: 99.6881% ( 1) 00:14:41.261 4.747 - 4.773: 99.6930% ( 1) 00:14:41.261 4.800 - 4.827: 99.6979% ( 1) 00:14:41.261 4.880 - 4.907: 99.7027% ( 1) 00:14:41.261 4.907 - 4.933: 99.7076% ( 1) 00:14:41.261 4.987 - 5.013: 99.7125% ( 1) 00:14:41.261 5.013 - 5.040: 99.7222% ( 2) 00:14:41.261 5.040 - 5.067: 99.7271% ( 1) 00:14:41.261 5.067 - 5.093: 99.7369% ( 2) 00:14:41.261 5.093 - 5.120: 99.7417% ( 1) 00:14:41.261 5.120 - 5.147: 99.7466% ( 1) 00:14:41.261 5.173 - 5.200: 99.7515% ( 1) 00:14:41.261 5.200 - 5.227: 99.7563% ( 1) 00:14:41.261 5.307 - 5.333: 99.7612% ( 1) 00:14:41.261 5.333 - 5.360: 99.7661% ( 1) 00:14:41.261 5.413 - 5.440: 99.7807% ( 3) 00:14:41.261 5.493 - 5.520: 99.7856% ( 1) 00:14:41.261 5.573 - 5.600: 99.7953% ( 2) 00:14:41.261 5.653 - 5.680: 99.8002% ( 1) 00:14:41.261 5.680 - 5.707: 99.8051% ( 1) 00:14:41.261 5.707 - 5.733: 99.8100% ( 1) 00:14:41.261 5.733 - 5.760: 99.8148% ( 1) 00:14:41.261 5.760 - 5.787: 
99.8294% ( 3) 00:14:41.261 5.787 - 5.813: 99.8343% ( 1) 00:14:41.261 5.813 - 5.840: 99.8392% ( 1) 00:14:41.261 5.840 - 5.867: 99.8441% ( 1) 00:14:41.261 5.867 - 5.893: 99.8538% ( 2) 00:14:41.261 5.920 - 5.947: 99.8587% ( 1) 00:14:41.261 6.027 - 6.053: 99.8636% ( 1) 00:14:41.261 6.053 - 6.080: 99.8733% ( 2) 00:14:41.261 6.160 - 6.187: 99.8782% ( 1) 00:14:41.261 6.240 - 6.267: 99.8830% ( 1) 00:14:41.261 6.320 - 6.347: 99.8879% ( 1) 00:14:41.261 6.507 - 6.533: 99.8928% ( 1) 00:14:41.261 6.613 - 6.640: 99.8977% ( 1) 00:14:41.261 6.693 - 6.720: 99.9025% ( 1) 00:14:41.261 6.827 - 6.880: 99.9172% ( 3) 00:14:41.261 7.413 - 7.467: 99.9220% ( 1) 00:14:41.261 7.680 - 7.733: 99.9269% ( 1) 00:14:41.261 3017.387 - 3031.040: 99.9318% ( 1) 00:14:41.261 3986.773 - 4014.080: 100.0000% ( 14) 00:14:41.261 00:14:41.261 Complete histogram 00:14:41.261 ================== 00:14:41.261 Range in us Cumulative Count 00:14:41.261 1.640 - 1.647: 0.0049% ( 1) 00:14:41.261 1.647 - 1.653: 0.5994% ( 122) 00:14:41.261 1.653 - 1.660: 0.9161% ( 65) 00:14:41.261 1.660 - 1.667: 0.9600% ( 9) 00:14:41.261 1.667 - 1.673: 1.1500% ( 39) 00:14:41.261 1.673 - 1.680: 1.1939% ( 9) 00:14:41.261 1.680 - 1.687: 1.2426% ( 10) 00:14:41.261 1.693 - 1.700: 1.6422% ( 82) 00:14:41.261 1.700 - 1.707: 26.9041% ( 5184) 00:14:41.261 1.707 - 1.720: 48.9157% ( 4517) 00:14:41.261 1.720 - 1.733: 73.9389% ( 5135) 00:14:41.261 1.733 - 1.747: 82.4375% ( 1744) 00:14:41.261 1.747 - 1.760: 83.8799% ( 296) 00:14:41.261 1.760 - 1.773: 87.3739% ( 717) 00:14:41.261 1.773 - 1.787: 92.5442% ( 1061) 00:14:41.261 1.787 - 1.800: 96.8081% ( 875) 00:14:41.261 1.800 - 1.813: 98.7769% ( 404) 00:14:41.261 1.813 - 1.827: 99.3811% ( 124) 00:14:41.261 1.827 - 1.840: 99.4591% ( 16) 00:14:41.261 1.840 - 1.853: 99.4786% ( 4) 00:14:41.261 1.987 - 2.000: 99.4835% ( 1) 00:14:41.261 2.027 - 2.040: 99.4883% ( 1) 00:14:41.261 3.120 - 3.133: 99.4932% ( 1) 00:14:41.261 3.147 - 3.160: 99.4981% ( 1) 00:14:41.261 3.267 - 3.280: 99.5029% ( 1) 00:14:41.261 3.320 - 3.333: 99.5127% ( 2) 00:14:41.261 3.493 - 3.520: 99.5176% ( 1) 00:14:41.261 3.520 - 3.547: 99.5224% ( 1) 00:14:41.261 3.733 - 3.760: 99.5273% ( 1) 00:14:41.261 3.813 - 3.840: 99.5322% ( 1) 00:14:41.261 4.213 - 4.240: 99.5371% ( 1) 00:14:41.261 4.453 - 4.480: 99.5419% ( 1) 00:14:41.261 4.507 - 4.533: 99.5468% ( 1) 00:14:41.261 4.560 - 4.587: 99.5566% ( 2) 00:14:41.261 4.640 - 4.667: 99.5614% ( 1) 00:14:41.261 4.667 - 4.693: 99.5663% ( 1) 00:14:41.261 4.720 - 4.747: 99.5712% ( 1) 00:14:41.261 4.960 - 4.987: 99.5760% ( 1) 00:14:41.261 5.013 - 5.040: 99.5809% ( 1) 00:14:41.261 5.120 - 5.147: 99.5858% ( 1) 00:14:41.261 5.147 - 5.173: 99.5907% ( 1) 00:14:41.261 5.173 - 5.200: 99.5955% ( 1) 00:14:41.261 5.360 - 5.387: 99.6004% ( 1) 00:14:41.261 5.547 - 5.573: 99.6053% ( 1) 00:14:41.261 5.787 - 5.813: 99.6102% ( 1) 00:14:41.261 10.293 - 10.347: 99.6150% ( 1) 00:14:41.261 3126.613 - 3140.267: 99.6199% ( 1) 00:14:41.261 3986.773 - 4014.080: 99.9951% ( 77) 00:14:41.261 4969.813 - 4997.120: 100.0000% ( 1) 00:14:41.261 00:14:41.261 [2024-10-08 18:30:35.184871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.261 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:41.261 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:41.261 18:30:35
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:41.261 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:41.261 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:41.522 [ 00:14:41.522 { 00:14:41.522 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:41.522 "subtype": "Discovery", 00:14:41.522 "listen_addresses": [], 00:14:41.522 "allow_any_host": true, 00:14:41.522 "hosts": [] 00:14:41.522 }, 00:14:41.522 { 00:14:41.522 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:41.522 "subtype": "NVMe", 00:14:41.522 "listen_addresses": [ 00:14:41.522 { 00:14:41.522 "trtype": "VFIOUSER", 00:14:41.522 "adrfam": "IPv4", 00:14:41.522 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:41.522 "trsvcid": "0" 00:14:41.522 } 00:14:41.522 ], 00:14:41.522 "allow_any_host": true, 00:14:41.522 "hosts": [], 00:14:41.522 "serial_number": "SPDK1", 00:14:41.522 "model_number": "SPDK bdev Controller", 00:14:41.522 "max_namespaces": 32, 00:14:41.522 "min_cntlid": 1, 00:14:41.522 "max_cntlid": 65519, 00:14:41.522 "namespaces": [ 00:14:41.522 { 00:14:41.522 "nsid": 1, 00:14:41.522 "bdev_name": "Malloc1", 00:14:41.522 "name": "Malloc1", 00:14:41.522 "nguid": "BE5E0A4EABC24377A724E22BC8C288F1", 00:14:41.522 "uuid": "be5e0a4e-abc2-4377-a724-e22bc8c288f1" 00:14:41.522 } 00:14:41.522 ] 00:14:41.522 }, 00:14:41.522 { 00:14:41.522 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:41.522 "subtype": "NVMe", 00:14:41.522 "listen_addresses": [ 00:14:41.522 { 00:14:41.522 "trtype": "VFIOUSER", 00:14:41.522 "adrfam": "IPv4", 00:14:41.522 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:41.522 "trsvcid": "0" 00:14:41.522 } 00:14:41.522 ], 00:14:41.522 "allow_any_host": true, 00:14:41.522 "hosts": [], 00:14:41.522 "serial_number": "SPDK2", 00:14:41.522 "model_number": "SPDK bdev Controller", 00:14:41.522 "max_namespaces": 32, 00:14:41.522 "min_cntlid": 1, 00:14:41.522 "max_cntlid": 65519, 00:14:41.522 "namespaces": [ 00:14:41.522 { 00:14:41.522 "nsid": 1, 00:14:41.522 "bdev_name": "Malloc2", 00:14:41.522 "name": "Malloc2", 00:14:41.522 "nguid": "6D87E910423B4428922414795B2BD081", 00:14:41.522 "uuid": "6d87e910-423b-4428-9224-14795b2bd081" 00:14:41.522 } 00:14:41.522 ] 00:14:41.522 } 00:14:41.522 ] 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1179870 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:41.522 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:41.522 [2024-10-08 18:30:35.539381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.783 Malloc3 00:14:41.783 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:41.783 [2024-10-08 18:30:35.752882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.783 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:41.783 Asynchronous Event Request test 00:14:41.783 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.783 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.783 Registering asynchronous event callbacks... 00:14:41.783 Starting namespace attribute notice tests for all controllers... 00:14:41.783 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:41.783 aer_cb - Changed Namespace 00:14:41.783 Cleaning up... 00:14:42.045 [ 00:14:42.045 { 00:14:42.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:42.045 "subtype": "Discovery", 00:14:42.045 "listen_addresses": [], 00:14:42.045 "allow_any_host": true, 00:14:42.045 "hosts": [] 00:14:42.045 }, 00:14:42.045 { 00:14:42.045 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:42.045 "subtype": "NVMe", 00:14:42.045 "listen_addresses": [ 00:14:42.045 { 00:14:42.045 "trtype": "VFIOUSER", 00:14:42.045 "adrfam": "IPv4", 00:14:42.045 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:42.045 "trsvcid": "0" 00:14:42.045 } 00:14:42.045 ], 00:14:42.045 "allow_any_host": true, 00:14:42.045 "hosts": [], 00:14:42.045 "serial_number": "SPDK1", 00:14:42.045 "model_number": "SPDK bdev Controller", 00:14:42.045 "max_namespaces": 32, 00:14:42.045 "min_cntlid": 1, 00:14:42.045 "max_cntlid": 65519, 00:14:42.045 "namespaces": [ 00:14:42.045 { 00:14:42.045 "nsid": 1, 00:14:42.045 "bdev_name": "Malloc1", 00:14:42.045 "name": "Malloc1", 00:14:42.045 "nguid": "BE5E0A4EABC24377A724E22BC8C288F1", 00:14:42.045 "uuid": "be5e0a4e-abc2-4377-a724-e22bc8c288f1" 00:14:42.045 }, 00:14:42.045 { 00:14:42.045 "nsid": 2, 00:14:42.045 "bdev_name": "Malloc3", 00:14:42.045 "name": "Malloc3", 00:14:42.045 "nguid": "D97097A1D3A84CCBA7AA3943959BF53C", 00:14:42.045 "uuid": "d97097a1-d3a8-4ccb-a7aa-3943959bf53c" 00:14:42.045 } 00:14:42.045 ] 00:14:42.045 }, 00:14:42.045 { 00:14:42.045 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:42.045 "subtype": "NVMe", 00:14:42.045 "listen_addresses": [ 00:14:42.045 { 00:14:42.045 "trtype": "VFIOUSER", 00:14:42.045 "adrfam": "IPv4", 00:14:42.045 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:42.045 "trsvcid": "0" 00:14:42.045 } 00:14:42.045 ], 00:14:42.045 "allow_any_host": true, 00:14:42.045 "hosts": [], 00:14:42.045 "serial_number": "SPDK2", 00:14:42.045 "model_number": "SPDK bdev 
Controller", 00:14:42.045 "max_namespaces": 32, 00:14:42.045 "min_cntlid": 1, 00:14:42.045 "max_cntlid": 65519, 00:14:42.045 "namespaces": [ 00:14:42.045 { 00:14:42.045 "nsid": 1, 00:14:42.045 "bdev_name": "Malloc2", 00:14:42.045 "name": "Malloc2", 00:14:42.045 "nguid": "6D87E910423B4428922414795B2BD081", 00:14:42.045 "uuid": "6d87e910-423b-4428-9224-14795b2bd081" 00:14:42.045 } 00:14:42.045 ] 00:14:42.045 } 00:14:42.045 ] 00:14:42.045 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1179870 00:14:42.045 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:42.045 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:42.045 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:42.045 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:42.045 [2024-10-08 18:30:35.984744] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:14:42.045 [2024-10-08 18:30:35.984814] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179910 ] 00:14:42.045 [2024-10-08 18:30:36.011504] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:42.045 [2024-10-08 18:30:36.020147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:42.045 [2024-10-08 18:30:36.020167] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f88007b0000 00:14:42.045 [2024-10-08 18:30:36.021154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:42.045 [2024-10-08 18:30:36.022159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:42.045 [2024-10-08 18:30:36.023170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:42.046 [2024-10-08 18:30:36.024175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:42.046 [2024-10-08 18:30:36.025183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:42.046 [2024-10-08 18:30:36.026192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:42.046 [2024-10-08 18:30:36.027199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:42.046 [2024-10-08 18:30:36.028203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:42.046 [2024-10-08 18:30:36.029215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:42.046 [2024-10-08 18:30:36.029227] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f88007a5000 00:14:42.046 [2024-10-08 18:30:36.030139] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:42.046 [2024-10-08 18:30:36.039509] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:42.046 [2024-10-08 18:30:36.039528] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:42.046 [2024-10-08 18:30:36.044593] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:42.046 [2024-10-08 18:30:36.044628] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:42.046 [2024-10-08 18:30:36.044686] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:42.046 [2024-10-08 18:30:36.044700] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:42.046 [2024-10-08 18:30:36.044704] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:42.046 [2024-10-08 18:30:36.045600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:42.046 [2024-10-08 18:30:36.045607] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:42.046 [2024-10-08 18:30:36.045613] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:42.046 [2024-10-08 18:30:36.046605] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:42.046 [2024-10-08 18:30:36.046612] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:42.046 [2024-10-08 18:30:36.046618] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:42.046 [2024-10-08 18:30:36.047617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:42.046 [2024-10-08 18:30:36.047625] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:42.046 [2024-10-08 18:30:36.048617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:42.046 [2024-10-08 18:30:36.048624] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:42.046 [2024-10-08 
18:30:36.048627] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:42.046 [2024-10-08 18:30:36.048632] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:42.046 [2024-10-08 18:30:36.048736] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:42.046 [2024-10-08 18:30:36.048739] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:42.046 [2024-10-08 18:30:36.048743] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:42.046 [2024-10-08 18:30:36.049623] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:42.046 [2024-10-08 18:30:36.050629] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:42.046 [2024-10-08 18:30:36.051640] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:42.046 [2024-10-08 18:30:36.052644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:42.046 [2024-10-08 18:30:36.052674] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:42.046 [2024-10-08 18:30:36.053650] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:42.046 [2024-10-08 18:30:36.053657] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:42.046 [2024-10-08 18:30:36.053660] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.053675] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:42.046 [2024-10-08 18:30:36.053681] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.053689] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:42.046 [2024-10-08 18:30:36.053692] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:42.046 [2024-10-08 18:30:36.053695] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.046 [2024-10-08 18:30:36.053704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:42.046 [2024-10-08 18:30:36.060981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:42.046 [2024-10-08 18:30:36.060990] 
nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:42.046 [2024-10-08 18:30:36.060994] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:42.046 [2024-10-08 18:30:36.060997] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:42.046 [2024-10-08 18:30:36.061000] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:42.046 [2024-10-08 18:30:36.061003] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:42.046 [2024-10-08 18:30:36.061007] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:42.046 [2024-10-08 18:30:36.061010] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.061017] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.061025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:42.046 [2024-10-08 18:30:36.068978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:42.046 [2024-10-08 18:30:36.068988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.046 [2024-10-08 18:30:36.068994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.046 [2024-10-08 18:30:36.069003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.046 [2024-10-08 18:30:36.069010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.046 [2024-10-08 18:30:36.069013] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.069026] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.069033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:42.046 [2024-10-08 18:30:36.076980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:42.046 [2024-10-08 18:30:36.076986] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:42.046 [2024-10-08 18:30:36.076990] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.076995] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.077000] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.077007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:42.046 [2024-10-08 18:30:36.084979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:42.046 [2024-10-08 18:30:36.085023] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.085029] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.085035] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:42.046 [2024-10-08 18:30:36.085038] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:42.046 [2024-10-08 18:30:36.085040] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.046 [2024-10-08 18:30:36.085045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:42.046 [2024-10-08 18:30:36.092978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:42.046 [2024-10-08 18:30:36.092986] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:42.046 [2024-10-08 18:30:36.092995] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.093000] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:42.046 [2024-10-08 18:30:36.093005] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:42.046 [2024-10-08 18:30:36.093008] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:42.046 [2024-10-08 18:30:36.093011] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.046 [2024-10-08 18:30:36.093017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:42.046 [2024-10-08 18:30:36.100979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:42.046 [2024-10-08 18:30:36.100989] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:42.047 [2024-10-08 18:30:36.100995] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:42.047 
[2024-10-08 18:30:36.101001] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:42.047 [2024-10-08 18:30:36.101004] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:42.047 [2024-10-08 18:30:36.101006] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.047 [2024-10-08 18:30:36.101010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:42.308 [2024-10-08 18:30:36.108979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:42.308 [2024-10-08 18:30:36.108988] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.108993] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.108999] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.109003] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.109007] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.109010] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.109014] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:42.309 [2024-10-08 18:30:36.109017] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:42.309 [2024-10-08 18:30:36.109020] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:42.309 [2024-10-08 18:30:36.109033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.116979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.116989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.124978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.124988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.132980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.132990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.140980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.140993] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:42.309 [2024-10-08 18:30:36.140996] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:42.309 [2024-10-08 18:30:36.140999] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:42.309 [2024-10-08 18:30:36.141002] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:42.309 [2024-10-08 18:30:36.141004] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:42.309 [2024-10-08 18:30:36.141009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:42.309 [2024-10-08 18:30:36.141014] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:42.309 [2024-10-08 18:30:36.141017] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:42.309 [2024-10-08 18:30:36.141020] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.309 [2024-10-08 18:30:36.141024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.141029] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:42.309 [2024-10-08 18:30:36.141032] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:42.309 [2024-10-08 18:30:36.141035] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.309 [2024-10-08 18:30:36.141039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.141044] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:42.309 [2024-10-08 18:30:36.141047] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:42.309 [2024-10-08 18:30:36.141050] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:42.309 [2024-10-08 18:30:36.141054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:42.309 [2024-10-08 18:30:36.148980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.148994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.149002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:42.309 [2024-10-08 18:30:36.149006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:42.309 ===================================================== 00:14:42.309 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:42.309 ===================================================== 00:14:42.309 Controller Capabilities/Features 00:14:42.309 ================================ 00:14:42.309 Vendor ID: 4e58 00:14:42.309 Subsystem Vendor ID: 4e58 00:14:42.309 Serial Number: SPDK2 00:14:42.309 Model Number: SPDK bdev Controller 00:14:42.309 Firmware Version: 25.01 00:14:42.309 Recommended Arb Burst: 6 00:14:42.309 IEEE OUI Identifier: 8d 6b 50 00:14:42.309 Multi-path I/O 00:14:42.309 May have multiple subsystem ports: Yes 00:14:42.309 May have multiple controllers: Yes 00:14:42.309 Associated with SR-IOV VF: No 00:14:42.309 Max Data Transfer Size: 131072 00:14:42.309 Max Number of Namespaces: 32 00:14:42.309 Max Number of I/O Queues: 127 00:14:42.309 NVMe Specification Version (VS): 1.3 00:14:42.309 NVMe Specification Version (Identify): 1.3 00:14:42.309 Maximum Queue Entries: 256 00:14:42.309 Contiguous Queues Required: Yes 00:14:42.309 Arbitration Mechanisms Supported 00:14:42.309 Weighted Round Robin: Not Supported 00:14:42.309 Vendor Specific: Not Supported 00:14:42.309 Reset Timeout: 15000 ms 00:14:42.309 Doorbell Stride: 4 bytes 00:14:42.309 NVM Subsystem Reset: Not Supported 00:14:42.309 Command Sets Supported 00:14:42.309 NVM Command Set: Supported 00:14:42.309 Boot Partition: Not Supported 00:14:42.309 Memory Page Size Minimum: 4096 bytes 00:14:42.309 Memory Page Size Maximum: 4096 bytes 00:14:42.309 Persistent Memory Region: Not Supported 00:14:42.309 Optional Asynchronous Events Supported 00:14:42.309 Namespace Attribute Notices: Supported 00:14:42.309 Firmware Activation Notices: Not Supported 00:14:42.309 ANA Change Notices: Not Supported 00:14:42.309 PLE Aggregate Log Change Notices: Not Supported 00:14:42.309 LBA Status Info Alert Notices: Not Supported 00:14:42.309 EGE Aggregate Log Change Notices: Not Supported 00:14:42.309 Normal NVM Subsystem Shutdown event: Not Supported 00:14:42.309 Zone Descriptor Change Notices: Not Supported 00:14:42.309 Discovery Log Change Notices: Not Supported 00:14:42.309 Controller Attributes 00:14:42.309 128-bit Host Identifier: Supported 00:14:42.309 Non-Operational Permissive Mode: Not Supported 00:14:42.309 NVM Sets: Not Supported 00:14:42.309 Read Recovery Levels: Not Supported 00:14:42.309 Endurance Groups: Not Supported 00:14:42.309 Predictable Latency Mode: Not Supported 00:14:42.309 Traffic Based Keep ALive: Not Supported 00:14:42.309 Namespace Granularity: Not Supported 00:14:42.309 SQ Associations: Not Supported 00:14:42.309 UUID List: Not Supported 00:14:42.309 Multi-Domain Subsystem: Not Supported 00:14:42.309 Fixed Capacity Management: Not Supported 00:14:42.309 Variable Capacity Management: Not Supported 00:14:42.309 Delete Endurance Group: Not Supported 00:14:42.309 Delete NVM Set: Not Supported 00:14:42.309 Extended LBA Formats Supported: Not Supported 00:14:42.309 Flexible Data Placement Supported: Not Supported 00:14:42.309 00:14:42.309 Controller Memory Buffer Support 00:14:42.309 ================================ 00:14:42.309 Supported: No 00:14:42.309 00:14:42.309 Persistent Memory Region Support 00:14:42.309 ================================ 00:14:42.309 Supported: No 00:14:42.309 00:14:42.309 Admin Command Set Attributes 00:14:42.309 ============================ 00:14:42.309 Security Send/Receive: Not Supported 
00:14:42.309 Format NVM: Not Supported 00:14:42.309 Firmware Activate/Download: Not Supported 00:14:42.309 Namespace Management: Not Supported 00:14:42.309 Device Self-Test: Not Supported 00:14:42.309 Directives: Not Supported 00:14:42.309 NVMe-MI: Not Supported 00:14:42.309 Virtualization Management: Not Supported 00:14:42.309 Doorbell Buffer Config: Not Supported 00:14:42.309 Get LBA Status Capability: Not Supported 00:14:42.309 Command & Feature Lockdown Capability: Not Supported 00:14:42.309 Abort Command Limit: 4 00:14:42.309 Async Event Request Limit: 4 00:14:42.309 Number of Firmware Slots: N/A 00:14:42.309 Firmware Slot 1 Read-Only: N/A 00:14:42.309 Firmware Activation Without Reset: N/A 00:14:42.309 Multiple Update Detection Support: N/A 00:14:42.309 Firmware Update Granularity: No Information Provided 00:14:42.309 Per-Namespace SMART Log: No 00:14:42.309 Asymmetric Namespace Access Log Page: Not Supported 00:14:42.309 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:42.309 Command Effects Log Page: Supported 00:14:42.309 Get Log Page Extended Data: Supported 00:14:42.309 Telemetry Log Pages: Not Supported 00:14:42.309 Persistent Event Log Pages: Not Supported 00:14:42.309 Supported Log Pages Log Page: May Support 00:14:42.309 Commands Supported & Effects Log Page: Not Supported 00:14:42.309 Feature Identifiers & Effects Log Page:May Support 00:14:42.309 NVMe-MI Commands & Effects Log Page: May Support 00:14:42.309 Data Area 4 for Telemetry Log: Not Supported 00:14:42.309 Error Log Page Entries Supported: 128 00:14:42.309 Keep Alive: Supported 00:14:42.309 Keep Alive Granularity: 10000 ms 00:14:42.309 00:14:42.309 NVM Command Set Attributes 00:14:42.310 ========================== 00:14:42.310 Submission Queue Entry Size 00:14:42.310 Max: 64 00:14:42.310 Min: 64 00:14:42.310 Completion Queue Entry Size 00:14:42.310 Max: 16 00:14:42.310 Min: 16 00:14:42.310 Number of Namespaces: 32 00:14:42.310 Compare Command: Supported 00:14:42.310 Write Uncorrectable Command: Not Supported 00:14:42.310 Dataset Management Command: Supported 00:14:42.310 Write Zeroes Command: Supported 00:14:42.310 Set Features Save Field: Not Supported 00:14:42.310 Reservations: Not Supported 00:14:42.310 Timestamp: Not Supported 00:14:42.310 Copy: Supported 00:14:42.310 Volatile Write Cache: Present 00:14:42.310 Atomic Write Unit (Normal): 1 00:14:42.310 Atomic Write Unit (PFail): 1 00:14:42.310 Atomic Compare & Write Unit: 1 00:14:42.310 Fused Compare & Write: Supported 00:14:42.310 Scatter-Gather List 00:14:42.310 SGL Command Set: Supported (Dword aligned) 00:14:42.310 SGL Keyed: Not Supported 00:14:42.310 SGL Bit Bucket Descriptor: Not Supported 00:14:42.310 SGL Metadata Pointer: Not Supported 00:14:42.310 Oversized SGL: Not Supported 00:14:42.310 SGL Metadata Address: Not Supported 00:14:42.310 SGL Offset: Not Supported 00:14:42.310 Transport SGL Data Block: Not Supported 00:14:42.310 Replay Protected Memory Block: Not Supported 00:14:42.310 00:14:42.310 Firmware Slot Information 00:14:42.310 ========================= 00:14:42.310 Active slot: 1 00:14:42.310 Slot 1 Firmware Revision: 25.01 00:14:42.310 00:14:42.310 00:14:42.310 Commands Supported and Effects 00:14:42.310 ============================== 00:14:42.310 Admin Commands 00:14:42.310 -------------- 00:14:42.310 Get Log Page (02h): Supported 00:14:42.310 Identify (06h): Supported 00:14:42.310 Abort (08h): Supported 00:14:42.310 Set Features (09h): Supported 00:14:42.310 Get Features (0Ah): Supported 00:14:42.310 Asynchronous Event Request (0Ch): 
Supported 00:14:42.310 Keep Alive (18h): Supported 00:14:42.310 I/O Commands 00:14:42.310 ------------ 00:14:42.310 Flush (00h): Supported LBA-Change 00:14:42.310 Write (01h): Supported LBA-Change 00:14:42.310 Read (02h): Supported 00:14:42.310 Compare (05h): Supported 00:14:42.310 Write Zeroes (08h): Supported LBA-Change 00:14:42.310 Dataset Management (09h): Supported LBA-Change 00:14:42.310 Copy (19h): Supported LBA-Change 00:14:42.310 00:14:42.310 Error Log 00:14:42.310 ========= 00:14:42.310 00:14:42.310 Arbitration 00:14:42.310 =========== 00:14:42.310 Arbitration Burst: 1 00:14:42.310 00:14:42.310 Power Management 00:14:42.310 ================ 00:14:42.310 Number of Power States: 1 00:14:42.310 Current Power State: Power State #0 00:14:42.310 Power State #0: 00:14:42.310 Max Power: 0.00 W 00:14:42.310 Non-Operational State: Operational 00:14:42.310 Entry Latency: Not Reported 00:14:42.310 Exit Latency: Not Reported 00:14:42.310 Relative Read Throughput: 0 00:14:42.310 Relative Read Latency: 0 00:14:42.310 Relative Write Throughput: 0 00:14:42.310 Relative Write Latency: 0 00:14:42.310 Idle Power: Not Reported 00:14:42.310 Active Power: Not Reported 00:14:42.310 Non-Operational Permissive Mode: Not Supported 00:14:42.310 00:14:42.310 Health Information 00:14:42.310 ================== 00:14:42.310 Critical Warnings: 00:14:42.310 Available Spare Space: OK 00:14:42.310 Temperature: OK 00:14:42.310 Device Reliability: OK 00:14:42.310 Read Only: No 00:14:42.310 Volatile Memory Backup: OK 00:14:42.310 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:42.310 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:42.310 Available Spare: 0% 00:14:42.310 Available Sp[2024-10-08 18:30:36.149076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:42.310 [2024-10-08 18:30:36.156980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:42.310 [2024-10-08 18:30:36.157002] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:42.310 [2024-10-08 18:30:36.157009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.310 [2024-10-08 18:30:36.157014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.310 [2024-10-08 18:30:36.157018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.310 [2024-10-08 18:30:36.157024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.310 [2024-10-08 18:30:36.157061] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:42.310 [2024-10-08 18:30:36.157069] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:42.310 [2024-10-08 18:30:36.158070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:42.310 [2024-10-08 18:30:36.158107] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:42.310 [2024-10-08 18:30:36.158111] 
nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:42.310 [2024-10-08 18:30:36.159079] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:42.310 [2024-10-08 18:30:36.159087] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:42.310 [2024-10-08 18:30:36.159132] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:42.310 [2024-10-08 18:30:36.160096] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:42.310 are Threshold: 0% 00:14:42.310 Life Percentage Used: 0% 00:14:42.310 Data Units Read: 0 00:14:42.310 Data Units Written: 0 00:14:42.310 Host Read Commands: 0 00:14:42.310 Host Write Commands: 0 00:14:42.310 Controller Busy Time: 0 minutes 00:14:42.310 Power Cycles: 0 00:14:42.310 Power On Hours: 0 hours 00:14:42.310 Unsafe Shutdowns: 0 00:14:42.310 Unrecoverable Media Errors: 0 00:14:42.310 Lifetime Error Log Entries: 0 00:14:42.310 Warning Temperature Time: 0 minutes 00:14:42.310 Critical Temperature Time: 0 minutes 00:14:42.310 00:14:42.310 Number of Queues 00:14:42.310 ================ 00:14:42.310 Number of I/O Submission Queues: 127 00:14:42.310 Number of I/O Completion Queues: 127 00:14:42.310 00:14:42.310 Active Namespaces 00:14:42.310 ================= 00:14:42.310 Namespace ID:1 00:14:42.310 Error Recovery Timeout: Unlimited 00:14:42.310 Command Set Identifier: NVM (00h) 00:14:42.310 Deallocate: Supported 00:14:42.310 Deallocated/Unwritten Error: Not Supported 00:14:42.310 Deallocated Read Value: Unknown 00:14:42.310 Deallocate in Write Zeroes: Not Supported 00:14:42.310 Deallocated Guard Field: 0xFFFF 00:14:42.310 Flush: Supported 00:14:42.310 Reservation: Supported 00:14:42.310 Namespace Sharing Capabilities: Multiple Controllers 00:14:42.310 Size (in LBAs): 131072 (0GiB) 00:14:42.310 Capacity (in LBAs): 131072 (0GiB) 00:14:42.310 Utilization (in LBAs): 131072 (0GiB) 00:14:42.310 NGUID: 6D87E910423B4428922414795B2BD081 00:14:42.310 UUID: 6d87e910-423b-4428-9224-14795b2bd081 00:14:42.310 Thin Provisioning: Not Supported 00:14:42.310 Per-NS Atomic Units: Yes 00:14:42.310 Atomic Boundary Size (Normal): 0 00:14:42.310 Atomic Boundary Size (PFail): 0 00:14:42.310 Atomic Boundary Offset: 0 00:14:42.310 Maximum Single Source Range Length: 65535 00:14:42.310 Maximum Copy Length: 65535 00:14:42.310 Maximum Source Range Count: 1 00:14:42.310 NGUID/EUI64 Never Reused: No 00:14:42.310 Namespace Write Protected: No 00:14:42.310 Number of LBA Formats: 1 00:14:42.310 Current LBA Format: LBA Format #00 00:14:42.310 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:42.310 00:14:42.310 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:42.310 [2024-10-08 18:30:36.339368] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.596 Initializing NVMe Controllers 00:14:47.596 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 
00:14:47.596 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:47.596 Initialization complete. Launching workers. 00:14:47.596 ======================================================== 00:14:47.596 Latency(us) 00:14:47.596 Device Information : IOPS MiB/s Average min max 00:14:47.596 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40069.60 156.52 3196.82 842.69 9078.01 00:14:47.596 ======================================================== 00:14:47.596 Total : 40069.60 156.52 3196.82 842.69 9078.01 00:14:47.596 00:14:47.596 [2024-10-08 18:30:41.446175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.596 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:47.596 [2024-10-08 18:30:41.618704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.878 Initializing NVMe Controllers 00:14:52.878 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:52.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:52.878 Initialization complete. Launching workers. 00:14:52.878 ======================================================== 00:14:52.878 Latency(us) 00:14:52.878 Device Information : IOPS MiB/s Average min max 00:14:52.878 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40032.40 156.38 3200.19 851.80 8749.66 00:14:52.878 ======================================================== 00:14:52.878 Total : 40032.40 156.38 3200.19 851.80 8749.66 00:14:52.878 00:14:52.878 [2024-10-08 18:30:46.638803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.878 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:52.878 [2024-10-08 18:30:46.827965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.159 [2024-10-08 18:30:51.967058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.159 Initializing NVMe Controllers 00:14:58.159 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:58.159 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:58.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:58.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:58.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:58.159 Initialization complete. Launching workers. 
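The two spdk_nvme_perf runs above differ only in the -w workload; every other flag is shared. A hedged reading of that invocation, with the transport string factored out (paths exactly as in this job's workspace):

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# -q 128: queue depth; -o 4096: I/O size in bytes; -w: I/O pattern (read above, then write);
# -t 5: seconds to run; -c 0x2: core mask, i.e. lcore 1 only, matching the
# 'NSID 1 with lcore 1' association above; -s 256: DPDK hugepage memory in MB.
# I read -g as asking DPDK for a single memory segment; treat that gloss as an assumption.
$PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
$PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

Both runs land near 40k IOPS at 4 KiB (about 156 MiB/s) at this queue depth, and the read and write latency tables above are consistent with each other.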
00:14:58.159 Starting thread on core 2 00:14:58.159 Starting thread on core 3 00:14:58.159 Starting thread on core 1 00:14:58.159 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:58.159 [2024-10-08 18:30:52.207357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.469 [2024-10-08 18:30:55.271132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.469 Initializing NVMe Controllers 00:15:01.469 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.469 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.469 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:01.469 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:01.469 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:01.469 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:01.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:01.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:01.469 Initialization complete. Launching workers. 00:15:01.469 Starting thread on core 1 with urgent priority queue 00:15:01.469 Starting thread on core 2 with urgent priority queue 00:15:01.469 Starting thread on core 3 with urgent priority queue 00:15:01.469 Starting thread on core 0 with urgent priority queue 00:15:01.469 SPDK bdev Controller (SPDK2 ) core 0: 6810.33 IO/s 14.68 secs/100000 ios 00:15:01.469 SPDK bdev Controller (SPDK2 ) core 1: 5027.00 IO/s 19.89 secs/100000 ios 00:15:01.469 SPDK bdev Controller (SPDK2 ) core 2: 5209.00 IO/s 19.20 secs/100000 ios 00:15:01.469 SPDK bdev Controller (SPDK2 ) core 3: 5955.33 IO/s 16.79 secs/100000 ios 00:15:01.469 ======================================================== 00:15:01.469 00:15:01.469 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:01.469 [2024-10-08 18:30:55.502453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.469 Initializing NVMe Controllers 00:15:01.469 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.469 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.469 Namespace ID: 1 size: 0GB 00:15:01.469 Initialization complete. 00:15:01.469 INFO: using host memory buffer for IO 00:15:01.469 Hello world! 
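Every example binary in this stretch (spdk_nvme_perf, reconnect, arbitration, hello_world) targets the same controller through an identical -r transport ID; only the workload flags vary. A minimal sketch reusing values from this trace:

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
EX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
$EX/hello_world -d 256 -g -r "$TRID"            # one write+read pair, prints 'Hello world!' as above
$EX/arbitration -t 3 -d 256 -g -r "$TRID"       # per-core urgent-priority queues, IO/s table as above
$EX/reconnect -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE -r "$TRID"   # -M 50: 50% reads

For VFIOUSER the trtype/traddr/subnqn triple plays the role an IP/port/subsystem tuple plays for TCP: traddr names the socket directory the target created (note the 'Release file .../vfio-user2/2/cntrl' debug line above) rather than a network address.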
00:15:01.469 [2024-10-08 18:30:55.512501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.730 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:01.730 [2024-10-08 18:30:55.732657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.112 Initializing NVMe Controllers 00:15:03.112 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.112 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.112 Initialization complete. Launching workers. 00:15:03.112 submit (in ns) avg, min, max = 4909.2, 2841.7, 3997983.3 00:15:03.112 complete (in ns) avg, min, max = 16856.0, 1625.8, 5990716.7 00:15:03.112 00:15:03.112 Submit histogram 00:15:03.112 ================ 00:15:03.112 Range in us Cumulative Count 00:15:03.112 2.840 - 2.853: 0.4898% ( 102) 00:15:03.112 2.853 - 2.867: 1.8969% ( 293) 00:15:03.112 2.867 - 2.880: 3.8515% ( 407) 00:15:03.112 2.880 - 2.893: 7.0307% ( 662) 00:15:03.112 2.893 - 2.907: 10.8054% ( 786) 00:15:03.112 2.907 - 2.920: 15.9967% ( 1081) 00:15:03.112 2.920 - 2.933: 21.9373% ( 1237) 00:15:03.112 2.933 - 2.947: 28.8767% ( 1445) 00:15:03.112 2.947 - 2.960: 34.8653% ( 1247) 00:15:03.112 2.960 - 2.973: 40.7050% ( 1216) 00:15:03.112 2.973 - 2.987: 46.7992% ( 1269) 00:15:03.112 2.987 - 3.000: 54.1517% ( 1531) 00:15:03.112 3.000 - 3.013: 63.3002% ( 1905) 00:15:03.112 3.013 - 3.027: 73.2747% ( 2077) 00:15:03.112 3.027 - 3.040: 81.6645% ( 1747) 00:15:03.112 3.040 - 3.053: 87.8548% ( 1289) 00:15:03.112 3.053 - 3.067: 92.4362% ( 954) 00:15:03.112 3.067 - 3.080: 95.2889% ( 594) 00:15:03.112 3.080 - 3.093: 97.4787% ( 456) 00:15:03.112 3.093 - 3.107: 98.6889% ( 252) 00:15:03.112 3.107 - 3.120: 99.2796% ( 123) 00:15:03.112 3.120 - 3.133: 99.5390% ( 54) 00:15:03.112 3.133 - 3.147: 99.6062% ( 14) 00:15:03.112 3.147 - 3.160: 99.6398% ( 7) 00:15:03.112 3.160 - 3.173: 99.6590% ( 4) 00:15:03.112 3.173 - 3.187: 99.6638% ( 1) 00:15:03.112 3.200 - 3.213: 99.6686% ( 1) 00:15:03.112 3.520 - 3.547: 99.6734% ( 1) 00:15:03.112 3.653 - 3.680: 99.6830% ( 2) 00:15:03.112 4.000 - 4.027: 99.6926% ( 2) 00:15:03.112 4.187 - 4.213: 99.6974% ( 1) 00:15:03.112 4.347 - 4.373: 99.7071% ( 2) 00:15:03.112 4.400 - 4.427: 99.7119% ( 1) 00:15:03.112 4.507 - 4.533: 99.7215% ( 2) 00:15:03.112 4.533 - 4.560: 99.7311% ( 2) 00:15:03.112 4.560 - 4.587: 99.7359% ( 1) 00:15:03.112 4.587 - 4.613: 99.7455% ( 2) 00:15:03.112 4.640 - 4.667: 99.7503% ( 1) 00:15:03.112 4.667 - 4.693: 99.7551% ( 1) 00:15:03.112 4.720 - 4.747: 99.7599% ( 1) 00:15:03.112 4.773 - 4.800: 99.7695% ( 2) 00:15:03.112 4.800 - 4.827: 99.7743% ( 1) 00:15:03.112 4.853 - 4.880: 99.7791% ( 1) 00:15:03.112 4.960 - 4.987: 99.7935% ( 3) 00:15:03.112 4.987 - 5.013: 99.8031% ( 2) 00:15:03.112 5.013 - 5.040: 99.8127% ( 2) 00:15:03.112 5.040 - 5.067: 99.8223% ( 2) 00:15:03.112 5.093 - 5.120: 99.8271% ( 1) 00:15:03.112 5.120 - 5.147: 99.8319% ( 1) 00:15:03.112 5.147 - 5.173: 99.8415% ( 2) 00:15:03.112 5.173 - 5.200: 99.8511% ( 2) 00:15:03.112 5.200 - 5.227: 99.8559% ( 1) 00:15:03.112 5.253 - 5.280: 99.8607% ( 1) 00:15:03.112 5.333 - 5.360: 99.8655% ( 1) 00:15:03.112 5.360 - 5.387: 99.8703% ( 1) 00:15:03.112 5.547 - 5.573: 99.8751% ( 1) 00:15:03.112 5.600 - 5.627: 99.8799% ( 1) 00:15:03.112 5.813 - 5.840: 
99.8895% ( 2) 00:15:03.112 5.840 - 5.867: 99.8943% ( 1) 00:15:03.112 5.867 - 5.893: 99.8991% ( 1) 00:15:03.112 6.107 - 6.133: 99.9040% ( 1) 00:15:03.112 6.213 - 6.240: 99.9088% ( 1) 00:15:03.112 6.373 - 6.400: 99.9136% ( 1) 00:15:03.112 6.427 - 6.453: 99.9280% ( 3) 00:15:03.112 6.453 - 6.480: 99.9328% ( 1) 00:15:03.112 6.533 - 6.560: 99.9376% ( 1) 00:15:03.112 6.667 - 6.693: 99.9424% ( 1) 00:15:03.112 6.827 - 6.880: 99.9472% ( 1) 00:15:03.113 8.533 - 8.587: 99.9520% ( 1) 00:15:03.113 3986.773 - 4014.080: 100.0000% ( 10) 00:15:03.113 00:15:03.113 Complete histogram 00:15:03.113 ================== 00:15:03.113 Range in us Cumulative Count 00:15:03.113 1.620 - 1.627: 0.0048% ( 1) 00:15:03.113 1.627 - 1.633: 0.0144% ( 2) 00:15:03.113 1.633 - 1.640: 0.0192% ( 1) 00:15:03.113 1.640 - 1.647: 0.4802% ( 96) 00:15:03.113 1.647 - 1.653: 0.5907% ( 23) 00:15:03.113 1.653 - 1.660: 0.6675% ( 16) 00:15:03.113 1.660 - 1.667: 0.7636% ( 20) 00:15:03.113 1.667 - [2024-10-08 18:30:56.833568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.113 1.673: 0.7972% ( 7) 00:15:03.113 1.673 - 1.680: 35.9074% ( 7311) 00:15:03.113 1.680 - 1.687: 43.9370% ( 1672) 00:15:03.113 1.687 - 1.693: 48.9795% ( 1050) 00:15:03.113 1.693 - 1.700: 69.5865% ( 4291) 00:15:03.113 1.700 - 1.707: 76.5596% ( 1452) 00:15:03.113 1.707 - 1.720: 82.3609% ( 1208) 00:15:03.113 1.720 - 1.733: 83.5759% ( 253) 00:15:03.113 1.733 - 1.747: 86.8799% ( 688) 00:15:03.113 1.747 - 1.760: 92.0857% ( 1084) 00:15:03.113 1.760 - 1.773: 96.5183% ( 923) 00:15:03.113 1.773 - 1.787: 98.6649% ( 447) 00:15:03.113 1.787 - 1.800: 99.3517% ( 143) 00:15:03.113 1.800 - 1.813: 99.4237% ( 15) 00:15:03.113 1.813 - 1.827: 99.4333% ( 2) 00:15:03.113 1.827 - 1.840: 99.4381% ( 1) 00:15:03.113 1.853 - 1.867: 99.4429% ( 1) 00:15:03.113 3.147 - 3.160: 99.4477% ( 1) 00:15:03.113 3.200 - 3.213: 99.4573% ( 2) 00:15:03.113 3.240 - 3.253: 99.4621% ( 1) 00:15:03.113 3.253 - 3.267: 99.4669% ( 1) 00:15:03.113 3.307 - 3.320: 99.4717% ( 1) 00:15:03.113 3.320 - 3.333: 99.4765% ( 1) 00:15:03.113 3.347 - 3.360: 99.4813% ( 1) 00:15:03.113 3.360 - 3.373: 99.4861% ( 1) 00:15:03.113 3.373 - 3.387: 99.4909% ( 1) 00:15:03.113 3.387 - 3.400: 99.4957% ( 1) 00:15:03.113 3.400 - 3.413: 99.5054% ( 2) 00:15:03.113 3.493 - 3.520: 99.5102% ( 1) 00:15:03.113 3.573 - 3.600: 99.5246% ( 3) 00:15:03.113 3.707 - 3.733: 99.5294% ( 1) 00:15:03.113 3.947 - 3.973: 99.5342% ( 1) 00:15:03.113 4.027 - 4.053: 99.5438% ( 2) 00:15:03.113 4.240 - 4.267: 99.5486% ( 1) 00:15:03.113 4.267 - 4.293: 99.5534% ( 1) 00:15:03.113 4.320 - 4.347: 99.5630% ( 2) 00:15:03.113 4.373 - 4.400: 99.5678% ( 1) 00:15:03.113 4.640 - 4.667: 99.5726% ( 1) 00:15:03.113 4.800 - 4.827: 99.5774% ( 1) 00:15:03.113 4.907 - 4.933: 99.5822% ( 1) 00:15:03.113 4.960 - 4.987: 99.5870% ( 1) 00:15:03.113 5.040 - 5.067: 99.5918% ( 1) 00:15:03.113 5.840 - 5.867: 99.5966% ( 1) 00:15:03.113 8.640 - 8.693: 99.6014% ( 1) 00:15:03.113 10.027 - 10.080: 99.6062% ( 1) 00:15:03.113 12.373 - 12.427: 99.6110% ( 1) 00:15:03.113 37.120 - 37.333: 99.6158% ( 1) 00:15:03.113 996.693 - 1003.520: 99.6206% ( 1) 00:15:03.113 2034.347 - 2048.000: 99.6254% ( 1) 00:15:03.113 2921.813 - 2935.467: 99.6302% ( 1) 00:15:03.113 3986.773 - 4014.080: 99.9952% ( 76) 00:15:03.113 5980.160 - 6007.467: 100.0000% ( 1) 00:15:03.113 00:15:03.113 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:03.113 18:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:03.113 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:03.113 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:03.113 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:03.113 [ 00:15:03.113 { 00:15:03.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:03.113 "subtype": "Discovery", 00:15:03.113 "listen_addresses": [], 00:15:03.113 "allow_any_host": true, 00:15:03.113 "hosts": [] 00:15:03.113 }, 00:15:03.113 { 00:15:03.113 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:03.113 "subtype": "NVMe", 00:15:03.113 "listen_addresses": [ 00:15:03.113 { 00:15:03.113 "trtype": "VFIOUSER", 00:15:03.113 "adrfam": "IPv4", 00:15:03.113 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:03.113 "trsvcid": "0" 00:15:03.113 } 00:15:03.113 ], 00:15:03.113 "allow_any_host": true, 00:15:03.113 "hosts": [], 00:15:03.113 "serial_number": "SPDK1", 00:15:03.113 "model_number": "SPDK bdev Controller", 00:15:03.113 "max_namespaces": 32, 00:15:03.113 "min_cntlid": 1, 00:15:03.113 "max_cntlid": 65519, 00:15:03.113 "namespaces": [ 00:15:03.113 { 00:15:03.113 "nsid": 1, 00:15:03.113 "bdev_name": "Malloc1", 00:15:03.113 "name": "Malloc1", 00:15:03.113 "nguid": "BE5E0A4EABC24377A724E22BC8C288F1", 00:15:03.113 "uuid": "be5e0a4e-abc2-4377-a724-e22bc8c288f1" 00:15:03.113 }, 00:15:03.113 { 00:15:03.113 "nsid": 2, 00:15:03.113 "bdev_name": "Malloc3", 00:15:03.113 "name": "Malloc3", 00:15:03.113 "nguid": "D97097A1D3A84CCBA7AA3943959BF53C", 00:15:03.113 "uuid": "d97097a1-d3a8-4ccb-a7aa-3943959bf53c" 00:15:03.113 } 00:15:03.113 ] 00:15:03.113 }, 00:15:03.113 { 00:15:03.113 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:03.113 "subtype": "NVMe", 00:15:03.113 "listen_addresses": [ 00:15:03.113 { 00:15:03.113 "trtype": "VFIOUSER", 00:15:03.113 "adrfam": "IPv4", 00:15:03.113 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:03.113 "trsvcid": "0" 00:15:03.113 } 00:15:03.113 ], 00:15:03.113 "allow_any_host": true, 00:15:03.113 "hosts": [], 00:15:03.113 "serial_number": "SPDK2", 00:15:03.113 "model_number": "SPDK bdev Controller", 00:15:03.113 "max_namespaces": 32, 00:15:03.113 "min_cntlid": 1, 00:15:03.113 "max_cntlid": 65519, 00:15:03.113 "namespaces": [ 00:15:03.113 { 00:15:03.113 "nsid": 1, 00:15:03.113 "bdev_name": "Malloc2", 00:15:03.113 "name": "Malloc2", 00:15:03.113 "nguid": "6D87E910423B4428922414795B2BD081", 00:15:03.113 "uuid": "6d87e910-423b-4428-9224-14795b2bd081" 00:15:03.113 } 00:15:03.113 ] 00:15:03.113 } 00:15:03.113 ] 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1184097 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:03.113 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:03.375 [2024-10-08 18:30:57.198666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.375 Malloc4 00:15:03.375 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:03.375 [2024-10-08 18:30:57.407936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:03.636 Asynchronous Event Request test 00:15:03.636 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.636 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.636 Registering asynchronous event callbacks... 00:15:03.636 Starting namespace attribute notice tests for all controllers... 00:15:03.636 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:03.636 aer_cb - Changed Namespace 00:15:03.636 Cleaning up... 
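The AER exercise above is a handshake between the aer test binary, a touch file, and the RPC that hot-adds a namespace. The same sequence, condensed from the trace (paths as traced):

AER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$AER -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # the waitforfile step above
rm -f /tmp/aer_touch_file
$RPC bdev_malloc_create 64 512 --name Malloc4
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # fires the namespace-attribute AER
wait $aerpid   # exits after logging 'aer_cb - Changed Namespace'

The second nvmf_get_subsystems dump below confirms the effect: cnode2 now carries Malloc4 as nsid 2 alongside Malloc2.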
00:15:03.636 [ 00:15:03.636 { 00:15:03.636 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:03.636 "subtype": "Discovery", 00:15:03.636 "listen_addresses": [], 00:15:03.636 "allow_any_host": true, 00:15:03.636 "hosts": [] 00:15:03.636 }, 00:15:03.636 { 00:15:03.636 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:03.636 "subtype": "NVMe", 00:15:03.636 "listen_addresses": [ 00:15:03.636 { 00:15:03.636 "trtype": "VFIOUSER", 00:15:03.636 "adrfam": "IPv4", 00:15:03.636 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:03.636 "trsvcid": "0" 00:15:03.636 } 00:15:03.636 ], 00:15:03.636 "allow_any_host": true, 00:15:03.636 "hosts": [], 00:15:03.636 "serial_number": "SPDK1", 00:15:03.636 "model_number": "SPDK bdev Controller", 00:15:03.636 "max_namespaces": 32, 00:15:03.636 "min_cntlid": 1, 00:15:03.636 "max_cntlid": 65519, 00:15:03.636 "namespaces": [ 00:15:03.636 { 00:15:03.636 "nsid": 1, 00:15:03.636 "bdev_name": "Malloc1", 00:15:03.636 "name": "Malloc1", 00:15:03.636 "nguid": "BE5E0A4EABC24377A724E22BC8C288F1", 00:15:03.636 "uuid": "be5e0a4e-abc2-4377-a724-e22bc8c288f1" 00:15:03.636 }, 00:15:03.636 { 00:15:03.636 "nsid": 2, 00:15:03.636 "bdev_name": "Malloc3", 00:15:03.636 "name": "Malloc3", 00:15:03.636 "nguid": "D97097A1D3A84CCBA7AA3943959BF53C", 00:15:03.636 "uuid": "d97097a1-d3a8-4ccb-a7aa-3943959bf53c" 00:15:03.636 } 00:15:03.636 ] 00:15:03.636 }, 00:15:03.636 { 00:15:03.636 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:03.636 "subtype": "NVMe", 00:15:03.636 "listen_addresses": [ 00:15:03.636 { 00:15:03.636 "trtype": "VFIOUSER", 00:15:03.636 "adrfam": "IPv4", 00:15:03.636 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:03.636 "trsvcid": "0" 00:15:03.636 } 00:15:03.636 ], 00:15:03.636 "allow_any_host": true, 00:15:03.636 "hosts": [], 00:15:03.636 "serial_number": "SPDK2", 00:15:03.636 "model_number": "SPDK bdev Controller", 00:15:03.636 "max_namespaces": 32, 00:15:03.636 "min_cntlid": 1, 00:15:03.636 "max_cntlid": 65519, 00:15:03.636 "namespaces": [ 00:15:03.636 { 00:15:03.636 "nsid": 1, 00:15:03.636 "bdev_name": "Malloc2", 00:15:03.636 "name": "Malloc2", 00:15:03.636 "nguid": "6D87E910423B4428922414795B2BD081", 00:15:03.636 "uuid": "6d87e910-423b-4428-9224-14795b2bd081" 00:15:03.636 }, 00:15:03.636 { 00:15:03.636 "nsid": 2, 00:15:03.636 "bdev_name": "Malloc4", 00:15:03.636 "name": "Malloc4", 00:15:03.636 "nguid": "3C59D7F50EEE42FB86CAF4C682785B0E", 00:15:03.636 "uuid": "3c59d7f5-0eee-42fb-86ca-f4c682785b0e" 00:15:03.636 } 00:15:03.636 ] 00:15:03.636 } 00:15:03.636 ] 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1184097 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1174995 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1174995 ']' 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1174995 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1174995 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1174995' 00:15:03.636 killing process with pid 1174995 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1174995 00:15:03.636 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1174995 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1184250 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1184250' 00:15:03.897 Process pid: 1184250 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1184250 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1184250 ']' 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.897 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:03.897 [2024-10-08 18:30:57.911551] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:03.897 [2024-10-08 18:30:57.912487] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
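Having killed the polled-mode target, the test relaunches nvmf_tgt with --interrupt-mode, which is what produces the 'Set SPDK running in interrupt mode' notice just above. The relaunch, reduced to its essentials:

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
$NVMF_TGT -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &   # -m: explicit lcore list; -e: tracepoint group mask
nvmfpid=$!
# In interrupt mode the reactors block on file descriptors instead of busy-polling;
# the per-poll-group 'Set spdk_thread (...) to intr mode' notices below confirm each
# nvmf_tgt_poll_group thread came up that way.

The four 'Reactor started on core N' lines that follow show all four cores of the '[0,1,2,3]' list running in this mode.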
00:15:03.897 [2024-10-08 18:30:57.912527] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.158 [2024-10-08 18:30:57.990376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.158 [2024-10-08 18:30:58.043548] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.158 [2024-10-08 18:30:58.043584] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.158 [2024-10-08 18:30:58.043593] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.158 [2024-10-08 18:30:58.043598] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.158 [2024-10-08 18:30:58.043602] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.158 [2024-10-08 18:30:58.044846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.158 [2024-10-08 18:30:58.045015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.158 [2024-10-08 18:30:58.045100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.158 [2024-10-08 18:30:58.045101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.158 [2024-10-08 18:30:58.108513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:04.158 [2024-10-08 18:30:58.109680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:04.158 [2024-10-08 18:30:58.110094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:04.158 [2024-10-08 18:30:58.110633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:04.158 [2024-10-08 18:30:58.110672] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
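With the target back up, the script recreates the vfio-user transport, this time with the extra transport arguments the interrupt-mode variant supplies, and rebuilds both subsystems. Condensed from the rpc.py calls traced below (I am reading -M and -I as the VFIOUSER transport's interrupt-related options here, going only by the script's 'transport_args=-M -I'):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do   # the script's 'for i in $(seq 1 $NUM_DEVICES)' loop
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $RPC bdev_malloc_create 64 512 -b Malloc$i
  $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done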
00:15:04.730 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.730 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:04.730 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:05.674 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:05.935 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:05.935 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:05.935 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.935 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:05.935 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:06.196 Malloc1 00:15:06.196 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:06.457 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:06.718 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:06.718 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.718 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:06.718 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:06.978 Malloc2 00:15:06.978 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:07.239 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:07.239 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1184250 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1184250 ']' 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1184250 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1184250 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1184250' 00:15:07.500 killing process with pid 1184250 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1184250 00:15:07.500 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1184250 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:07.761 00:15:07.761 real 0m51.547s 00:15:07.761 user 3m17.401s 00:15:07.761 sys 0m2.750s 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:07.761 ************************************ 00:15:07.761 END TEST nvmf_vfio_user 00:15:07.761 ************************************ 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.761 ************************************ 00:15:07.761 START TEST nvmf_vfio_user_nvme_compliance 00:15:07.761 ************************************ 00:15:07.761 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:08.023 * Looking for test storage... 
00:15:08.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.023 --rc genhtml_branch_coverage=1 00:15:08.023 --rc genhtml_function_coverage=1 00:15:08.023 --rc genhtml_legend=1 00:15:08.023 --rc geninfo_all_blocks=1 00:15:08.023 --rc geninfo_unexecuted_blocks=1 00:15:08.023 00:15:08.023 ' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.023 --rc genhtml_branch_coverage=1 00:15:08.023 --rc genhtml_function_coverage=1 00:15:08.023 --rc genhtml_legend=1 00:15:08.023 --rc geninfo_all_blocks=1 00:15:08.023 --rc geninfo_unexecuted_blocks=1 00:15:08.023 00:15:08.023 ' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.023 --rc genhtml_branch_coverage=1 00:15:08.023 --rc genhtml_function_coverage=1 00:15:08.023 --rc genhtml_legend=1 00:15:08.023 --rc geninfo_all_blocks=1 00:15:08.023 --rc geninfo_unexecuted_blocks=1 00:15:08.023 00:15:08.023 ' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.023 --rc genhtml_branch_coverage=1 00:15:08.023 --rc genhtml_function_coverage=1 00:15:08.023 --rc genhtml_legend=1 00:15:08.023 --rc geninfo_all_blocks=1 00:15:08.023 --rc 
geninfo_unexecuted_blocks=1 00:15:08.023 00:15:08.023 ' 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.023 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1185030 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1185030' 00:15:08.024 Process pid: 1185030 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1185030 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1185030 ']' 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.024 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.024 [2024-10-08 18:31:02.011597] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
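The nvmf_vfio_user suite has finished (note the 'END TEST' marker further up) and the compliance run now brings up its own single-subsystem target on core mask 0x7. Condensed from the rpc_cmd trace that follows, assuming rpc_cmd is autotest_common.sh's thin wrapper over scripts/rpc.py:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$RPC bdev_malloc_create 64 512 -b malloc0
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -a: allow any host; -m 32: max namespaces
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

nvme_compliance then drives the CUnit suite visible below (admin_identify_ctrlr_verify_dptr and friends) against that controller.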
00:15:08.024 [2024-10-08 18:31:02.011653] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.284 [2024-10-08 18:31:02.091986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.284 [2024-10-08 18:31:02.152818] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.284 [2024-10-08 18:31:02.152857] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.284 [2024-10-08 18:31:02.152863] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.284 [2024-10-08 18:31:02.152868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.284 [2024-10-08 18:31:02.152872] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.284 [2024-10-08 18:31:02.153754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.284 [2024-10-08 18:31:02.153907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.284 [2024-10-08 18:31:02.153908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.855 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.855 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:08.855 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.795 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 malloc0 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:10.057 18:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:10.057 00:15:10.057 00:15:10.057 CUnit - A unit testing framework for C - Version 2.1-3 00:15:10.057 http://cunit.sourceforge.net/ 00:15:10.057 00:15:10.057 00:15:10.057 Suite: nvme_compliance 00:15:10.057 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-08 18:31:04.050407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.057 [2024-10-08 18:31:04.051684] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:10.057 [2024-10-08 18:31:04.051697] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:10.057 [2024-10-08 18:31:04.051701] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:10.057 [2024-10-08 18:31:04.053431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.057 passed 00:15:10.318 Test: admin_identify_ctrlr_verify_fused ...[2024-10-08 18:31:04.130922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.318 [2024-10-08 18:31:04.135950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.318 passed 00:15:10.318 Test: admin_identify_ns ...[2024-10-08 18:31:04.211528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.318 [2024-10-08 18:31:04.271985] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:10.318 [2024-10-08 18:31:04.279983] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:10.318 [2024-10-08 18:31:04.301059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:10.318 passed 00:15:10.318 Test: admin_get_features_mandatory_features ...[2024-10-08 18:31:04.374245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.577 [2024-10-08 18:31:04.377274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.577 passed 00:15:10.577 Test: admin_get_features_optional_features ...[2024-10-08 18:31:04.453756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.577 [2024-10-08 18:31:04.456771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.578 passed 00:15:10.578 Test: admin_set_features_number_of_queues ...[2024-10-08 18:31:04.532312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.837 [2024-10-08 18:31:04.637068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.837 passed 00:15:10.837 Test: admin_get_log_page_mandatory_logs ...[2024-10-08 18:31:04.713081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.837 [2024-10-08 18:31:04.716110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.837 passed 00:15:10.837 Test: admin_get_log_page_with_lpo ...[2024-10-08 18:31:04.791864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.837 [2024-10-08 18:31:04.859984] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:10.837 [2024-10-08 18:31:04.873033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.098 passed 00:15:11.098 Test: fabric_property_get ...[2024-10-08 18:31:04.946234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.098 [2024-10-08 18:31:04.947441] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:11.098 [2024-10-08 18:31:04.949252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.098 passed 00:15:11.098 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-08 18:31:05.025701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.098 [2024-10-08 18:31:05.026905] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:11.098 [2024-10-08 18:31:05.028719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.098 passed 00:15:11.098 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-08 18:31:05.103330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.357 [2024-10-08 18:31:05.190984] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:11.357 [2024-10-08 18:31:05.206980] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:11.357 [2024-10-08 18:31:05.212059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.357 passed 00:15:11.357 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-08 18:31:05.284311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.357 [2024-10-08 18:31:05.285505] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:11.358 [2024-10-08 18:31:05.287328] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.358 passed 00:15:11.358 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-08 18:31:05.363055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.619 [2024-10-08 18:31:05.439983] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:11.619 [2024-10-08 18:31:05.463983] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:11.619 [2024-10-08 18:31:05.469050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.619 passed 00:15:11.619 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-08 18:31:05.545159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.619 [2024-10-08 18:31:05.546358] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:11.619 [2024-10-08 18:31:05.546376] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:11.619 [2024-10-08 18:31:05.548179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.619 passed 00:15:11.619 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-08 18:31:05.624920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.879 [2024-10-08 18:31:05.717978] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:11.879 [2024-10-08 18:31:05.725980] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:11.879 [2024-10-08 18:31:05.733986] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:11.879 [2024-10-08 18:31:05.741984] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:11.879 [2024-10-08 18:31:05.771053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.879 passed 00:15:11.879 Test: admin_create_io_sq_verify_pc ...[2024-10-08 18:31:05.842267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.879 [2024-10-08 18:31:05.860985] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:11.879 [2024-10-08 18:31:05.878441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.879 passed 00:15:12.139 Test: admin_create_io_qp_max_qps ...[2024-10-08 18:31:05.953907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.080 [2024-10-08 18:31:07.064985] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:13.652 [2024-10-08 18:31:07.445458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.652 passed 00:15:13.652 Test: admin_create_io_sq_shared_cq ...[2024-10-08 18:31:07.519332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.652 [2024-10-08 18:31:07.654984] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:13.652 [2024-10-08 18:31:07.692031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.914 passed 00:15:13.914 00:15:13.914 Run Summary: Type Total Ran Passed Failed Inactive 00:15:13.914 suites 1 1 n/a 0 0 00:15:13.914 tests 18 18 18 0 0 00:15:13.914 asserts 360 
360 360 0 n/a 00:15:13.914 00:15:13.914 Elapsed time = 1.494 seconds 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1185030 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1185030 ']' 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1185030 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1185030 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1185030' 00:15:13.914 killing process with pid 1185030 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1185030 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1185030 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:13.914 00:15:13.914 real 0m6.205s 00:15:13.914 user 0m17.509s 00:15:13.914 sys 0m0.532s 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.914 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 ************************************ 00:15:13.914 END TEST nvmf_vfio_user_nvme_compliance 00:15:13.914 ************************************ 00:15:14.176 18:31:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:14.176 18:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.176 18:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.176 18:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.176 ************************************ 00:15:14.176 START TEST nvmf_vfio_user_fuzz 00:15:14.176 ************************************ 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:14.176 * Looking for test storage... 
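The scripts/common.sh trace that follows is SPDK's `lt 1.15 2` check (is the installed lcov older than 2?) picking which LCOV_OPTS to export: cmp_versions splits each version string on '.', '-' and ':', then walks the fields in parallel, treating missing fields as zero. A simplified sketch of that dotted-decimal comparison (dots only; not the exact SPDK implementation):

#!/usr/bin/env bash
version_lt() {                      # returns 0 (true) if $1 < $2
    local IFS=.                     # split only on dots in this sketch
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2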
00:15:14.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.176 --rc genhtml_branch_coverage=1 00:15:14.176 --rc genhtml_function_coverage=1 00:15:14.176 --rc genhtml_legend=1 00:15:14.176 --rc geninfo_all_blocks=1 00:15:14.176 --rc geninfo_unexecuted_blocks=1 00:15:14.176 00:15:14.176 ' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.176 --rc genhtml_branch_coverage=1 00:15:14.176 --rc genhtml_function_coverage=1 00:15:14.176 --rc genhtml_legend=1 00:15:14.176 --rc geninfo_all_blocks=1 00:15:14.176 --rc geninfo_unexecuted_blocks=1 00:15:14.176 00:15:14.176 ' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.176 --rc genhtml_branch_coverage=1 00:15:14.176 --rc genhtml_function_coverage=1 00:15:14.176 --rc genhtml_legend=1 00:15:14.176 --rc geninfo_all_blocks=1 00:15:14.176 --rc geninfo_unexecuted_blocks=1 00:15:14.176 00:15:14.176 ' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.176 --rc genhtml_branch_coverage=1 00:15:14.176 --rc genhtml_function_coverage=1 00:15:14.176 --rc genhtml_legend=1 00:15:14.176 --rc geninfo_all_blocks=1 00:15:14.176 --rc geninfo_unexecuted_blocks=1 00:15:14.176 00:15:14.176 ' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.176 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.442 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:14.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1186408 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1186408' 00:15:14.443 Process pid: 1186408 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1186408 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1186408 ']' 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
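waitforlisten above parks the harness until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock; the trace shows the wait capped at max_retries=100. A minimal sketch of that polling pattern, assuming an OpenBSD-style nc with UNIX-socket support (-U) and scan mode (-z); the helper name is illustrative, not SPDK's implementation:

#!/usr/bin/env bash
wait_for_rpc_socket() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        nc -zU "$sock" 2>/dev/null && return 0    # socket exists and is accepting
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

wait_for_rpc_socket /var/tmp/spdk.sock 100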
00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.443 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:15.386 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.386 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:15.386 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:16.326 malloc0 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:16.326 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.327 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:16.327 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.327 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
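Collected from the rpc_cmd trace above, the entire vfio-user fuzz target setup reduces to a socket directory plus five RPCs. Shown here as direct scripts/rpc.py calls against the same /var/tmp/spdk.sock (rpc_cmd in the log is the test harness's wrapper around that RPC channel; the repo-relative path is an assumption):

#!/usr/bin/env bash
RPC="./scripts/rpc.py"                                   # path assumed; adjust to the checkout

$RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$RPC bdev_malloc_create 64 512 -b malloc0                # 64 MB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

With that in place, nvme_fuzz (next line) connects via 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' and exercises the controller for the 30-second window visible in the timestamps (-t 30).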
00:15:16.327 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:48.447 Fuzzing completed. Shutting down the fuzz application 00:15:48.447 00:15:48.447 Dumping successful admin opcodes: 00:15:48.447 8, 9, 10, 24, 00:15:48.447 Dumping successful io opcodes: 00:15:48.447 0, 00:15:48.447 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1343201, total successful commands: 5264, random_seed: 1904533632 00:15:48.447 NS: 0x200003a1ef00 admin qp, Total commands completed: 300305, total successful commands: 2416, random_seed: 589300800 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1186408 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1186408 ']' 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1186408 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1186408 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1186408' 00:15:48.447 killing process with pid 1186408 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1186408 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1186408 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:48.447 00:15:48.447 real 0m32.853s 00:15:48.447 user 0m38.357s 00:15:48.447 sys 0m23.612s 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.447 
************************************ 00:15:48.447 END TEST nvmf_vfio_user_fuzz 00:15:48.447 ************************************ 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.447 ************************************ 00:15:48.447 START TEST nvmf_auth_target 00:15:48.447 ************************************ 00:15:48.447 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:48.447 * Looking for test storage... 00:15:48.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:48.447 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.448 --rc genhtml_branch_coverage=1 00:15:48.448 --rc genhtml_function_coverage=1 00:15:48.448 --rc genhtml_legend=1 00:15:48.448 --rc geninfo_all_blocks=1 00:15:48.448 --rc geninfo_unexecuted_blocks=1 00:15:48.448 00:15:48.448 ' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.448 --rc genhtml_branch_coverage=1 00:15:48.448 --rc genhtml_function_coverage=1 00:15:48.448 --rc genhtml_legend=1 00:15:48.448 --rc geninfo_all_blocks=1 00:15:48.448 --rc geninfo_unexecuted_blocks=1 00:15:48.448 00:15:48.448 ' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.448 --rc genhtml_branch_coverage=1 00:15:48.448 --rc genhtml_function_coverage=1 00:15:48.448 --rc genhtml_legend=1 00:15:48.448 --rc geninfo_all_blocks=1 00:15:48.448 --rc geninfo_unexecuted_blocks=1 00:15:48.448 00:15:48.448 ' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.448 --rc genhtml_branch_coverage=1 00:15:48.448 --rc genhtml_function_coverage=1 00:15:48.448 --rc genhtml_legend=1 00:15:48.448 --rc geninfo_all_blocks=1 00:15:48.448 --rc geninfo_unexecuted_blocks=1 00:15:48.448 00:15:48.448 ' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.448 18:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:48.448 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:55.033 
18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.033 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.033 18:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.033 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.033 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.033 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.033 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.034 18:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:55.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:55.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms
00:15:55.034
00:15:55.034 --- 10.0.0.2 ping statistics ---
00:15:55.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:55.034 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:55.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:55.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms
00:15:55.034
00:15:55.034 --- 10.0.0.1 ping statistics ---
00:15:55.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:55.034 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1196461
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1196461
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1196461 ']'
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.034 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.976 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.976 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:55.976 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:55.976 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.976 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.976 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1196805 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b51bfd043de1e0544aed7128858f21a21b51bd3fff6f0949 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Fz0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b51bfd043de1e0544aed7128858f21a21b51bd3fff6f0949 0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b51bfd043de1e0544aed7128858f21a21b51bd3fff6f0949 0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b51bfd043de1e0544aed7128858f21a21b51bd3fff6f0949 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
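
The gen_dhchap_key calls traced above reduce to three steps: pull N random bytes with xxd, hand the resulting hex string to an inline python snippet that wraps it as a DHHC-1 secret, and chmod the file to 0600. A minimal sketch of that flow, assuming the layout the finished secrets at the bottom of this log imply (the base64 payload decodes back to the ASCII hex string, so the encoding covers the hex characters themselves plus what appears to be a CRC-32 trailer) and the digest ids from the digests map above (0=null, 1=sha256, 2=sha384, 3=sha512); variable names and the encoding body are illustrative, not the verbatim SPDK helpers:

# sketch of gen_dhchap_key/format_dhchap_key as traced above (null digest, 48 hex chars)
key_hex=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48-char hex key
key_file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key_hex" 0 > "$key_file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed little-endian CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$key_file"                         # same permissions the log applies
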
00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Fz0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Fz0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Fz0 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2df51dfc55dd8c9519bb3c0add920564872cb9f4c6da03fb7078000a791a2a35 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Jxx 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2df51dfc55dd8c9519bb3c0add920564872cb9f4c6da03fb7078000a791a2a35 3 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2df51dfc55dd8c9519bb3c0add920564872cb9f4c6da03fb7078000a791a2a35 3 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2df51dfc55dd8c9519bb3c0add920564872cb9f4c6da03fb7078000a791a2a35 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Jxx 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Jxx 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Jxx 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
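
Orientation for the arrays being filled here: keys[i] is the secret the host will present, and ckeys[i] is the optional controller secret that makes the DH-HMAC-CHAP handshake bidirectional; ckeys[3] is deliberately left empty further down, so key3 also exercises the unidirectional path. The test wires the optional key in by expanding it conditionally. Paraphrasing the target/auth.sh@68 line that appears later in this log, with $keyid standing in for the script's positional parameter:

# if ckeys[keyid] is empty, the --dhchap-ctrlr-key argument is omitted entirely
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
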
00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8934b1a51656568a757f623b7f41dbfd 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.IIy 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8934b1a51656568a757f623b7f41dbfd 1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8934b1a51656568a757f623b7f41dbfd 1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8934b1a51656568a757f623b7f41dbfd 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.IIy 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.IIy 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.IIy 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c0cf9d3103c397e7934c08f9f6055c45f3941e1b704136de 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Qni 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c0cf9d3103c397e7934c08f9f6055c45f3941e1b704136de 2 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c0cf9d3103c397e7934c08f9f6055c45f3941e1b704136de 2 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:55.977 18:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c0cf9d3103c397e7934c08f9f6055c45f3941e1b704136de 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:15:55.977 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Qni 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Qni 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Qni 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cd860ad7bdb4884409af9423871cf7f86d5d6ae85edac539 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.2gf 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cd860ad7bdb4884409af9423871cf7f86d5d6ae85edac539 2 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cd860ad7bdb4884409af9423871cf7f86d5d6ae85edac539 2 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cd860ad7bdb4884409af9423871cf7f86d5d6ae85edac539 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.2gf 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.2gf 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.2gf 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
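
Generating the key files is only half the setup: both the target (default RPC socket) and the host application (/var/tmp/host.sock) must load each file into their keyrings under matching names before any --dhchap-key argument can reference them. That registration happens a little further down (target/auth.sh@108-113); condensed here, with rpc.py standing in for the full scripts/rpc.py path the log uses:

# condensed from the keyring_file_add_key calls below (auth.sh@108-113)
for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"                       # target side
  rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
  if [[ -n ${ckeys[$i]} ]]; then
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done
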
00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=52cc094847beb5a37e9425efcd60092f 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.kbF 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 52cc094847beb5a37e9425efcd60092f 1 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 52cc094847beb5a37e9425efcd60092f 1 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=52cc094847beb5a37e9425efcd60092f 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.kbF 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.kbF 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kbF 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e3388f4466a03a73676ba8be57010dd46b0fee1775864e86a7fd96b70edff036 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.CCS 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key e3388f4466a03a73676ba8be57010dd46b0fee1775864e86a7fd96b70edff036 3 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e3388f4466a03a73676ba8be57010dd46b0fee1775864e86a7fd96b70edff036 3 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e3388f4466a03a73676ba8be57010dd46b0fee1775864e86a7fd96b70edff036 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.CCS 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.CCS 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.CCS 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1196461 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1196461 ']' 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.239 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1196805 /var/tmp/host.sock 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1196805 ']' 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:56.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
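
Everything from here on is one routine repeated: target/auth.sh@118-123 sweeps every digest x dhgroup x key combination, and connect_authenticate (auth.sh@65-@83) performs the round trip for each. A condensed reconstruction of that control flow, with flags trimmed and the DHHC-1 secrets elided ("..." is deliberate, not a real value); both the SPDK bdev path and the kernel nvme-cli path are shown, since the log exercises both:

# reconstruction of the sweep (auth.sh@118-123) + connect_authenticate (auth.sh@65-@83)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                                    --dhchap-dhgroups "$dhgroup"
      # grant the host access to the subsystem with this key pair
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
              --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      # SPDK initiator: attach, then check the negotiated auth parameters
      hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
              -q "$hostnqn" -n "$subnqn" -b nvme0 \
              --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect "completed"
      hostrpc bdev_nvme_detach_controller nvme0
      # kernel initiator: same secrets through nvme-cli, then tear down
      nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" -l 0 \
           --dhchap-secret "DHHC-1:...:" --dhchap-ctrl-secret "DHHC-1:...:"
      nvme disconnect -n "$subnqn"
      rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
  done
done
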
00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.500 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Fz0 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Fz0 00:15:56.761 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Fz0 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Jxx ]] 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jxx 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jxx 00:15:57.022 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jxx 00:15:57.022 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:57.022 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IIy 00:15:57.022 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.022 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.022 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.022 18:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IIy 00:15:57.022 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IIy 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Qni ]] 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qni 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qni 00:15:57.283 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qni 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2gf 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2gf 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2gf 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kbF ]] 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kbF 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kbF 00:15:57.543 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kbF 00:15:57.804 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:57.804 18:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CCS 00:15:57.804 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.804 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.804 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.804 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CCS 00:15:57.805 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CCS 00:15:58.065 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:58.065 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:58.065 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.065 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.065 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.065 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.326 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.326 
18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:58.587
00:15:58.587 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:58.587 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:58.587 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:58.849 {
00:15:58.849 "cntlid": 1,
00:15:58.849 "qid": 0,
00:15:58.849 "state": "enabled",
00:15:58.849 "thread": "nvmf_tgt_poll_group_000",
00:15:58.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:15:58.849 "listen_address": {
00:15:58.849 "trtype": "TCP",
00:15:58.849 "adrfam": "IPv4",
00:15:58.849 "traddr": "10.0.0.2",
00:15:58.849 "trsvcid": "4420"
00:15:58.849 },
00:15:58.849 "peer_address": {
00:15:58.849 "trtype": "TCP",
00:15:58.849 "adrfam": "IPv4",
00:15:58.849 "traddr": "10.0.0.1",
00:15:58.849 "trsvcid": "44124"
00:15:58.849 },
00:15:58.849 "auth": {
00:15:58.849 "state": "completed",
00:15:58.849 "digest": "sha256",
00:15:58.849 "dhgroup": "null"
00:15:58.849 }
00:15:58.849 }
00:15:58.849 ]'
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:58.849 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:59.113 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:15:59.113 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:15:59.684 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.684 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:59.684 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.685 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.685 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.685 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.685 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:59.685 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.945 18:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.945 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.206 00:16:00.206 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.206 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.206 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.467 { 00:16:00.467 "cntlid": 3, 00:16:00.467 "qid": 0, 00:16:00.467 "state": "enabled", 00:16:00.467 "thread": "nvmf_tgt_poll_group_000", 00:16:00.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:00.467 "listen_address": { 00:16:00.467 "trtype": "TCP", 00:16:00.467 "adrfam": "IPv4", 00:16:00.467 "traddr": "10.0.0.2", 00:16:00.467 "trsvcid": "4420" 00:16:00.467 }, 00:16:00.467 "peer_address": { 00:16:00.467 "trtype": "TCP", 00:16:00.467 "adrfam": "IPv4", 00:16:00.467 "traddr": "10.0.0.1", 00:16:00.467 "trsvcid": "44156" 00:16:00.467 }, 00:16:00.467 "auth": { 00:16:00.467 "state": "completed", 00:16:00.467 "digest": "sha256", 00:16:00.467 "dhgroup": "null" 00:16:00.467 } 00:16:00.467 } 00:16:00.467 ]' 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.467 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.727 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:00.727 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.297 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.298 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.558 18:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.558 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.818 00:16:01.818 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.818 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.818 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.080 { 00:16:02.080 "cntlid": 5, 00:16:02.080 "qid": 0, 00:16:02.080 "state": "enabled", 00:16:02.080 "thread": "nvmf_tgt_poll_group_000", 00:16:02.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:02.080 "listen_address": { 00:16:02.080 "trtype": "TCP", 00:16:02.080 "adrfam": "IPv4", 00:16:02.080 "traddr": "10.0.0.2", 00:16:02.080 "trsvcid": "4420" 00:16:02.080 }, 00:16:02.080 "peer_address": { 00:16:02.080 "trtype": "TCP", 00:16:02.080 "adrfam": "IPv4", 00:16:02.080 "traddr": "10.0.0.1", 00:16:02.080 "trsvcid": "48548" 00:16:02.080 }, 00:16:02.080 "auth": { 00:16:02.080 "state": "completed", 00:16:02.080 "digest": "sha256", 00:16:02.080 "dhgroup": "null" 00:16:02.080 } 00:16:02.080 } 00:16:02.080 ]' 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.080 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.080 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:02.080 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.080 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.080 18:31:56 
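# Note: each successful attach is verified through the target-side RPC
# nvmf_subsystem_get_qpairs, whose per-qpair "auth" object records the negotiated
# digest, DH group and final handshake state; the jq probes above reduce that to
# three assertions. Equivalent stand-alone check, using the paths from this run:
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
jq -r '.[0].auth.digest'  qpairs.json   # expect: sha256
jq -r '.[0].auth.dhgroup' qpairs.json   # expect: null, or the ffdhe* group under test
jq -r '.[0].auth.state'   qpairs.json   # expect: completed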
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.080 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.341 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:02.341 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.911 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.171 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.430 00:16:03.430 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.431 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.431 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.691 { 00:16:03.691 "cntlid": 7, 00:16:03.691 "qid": 0, 00:16:03.691 "state": "enabled", 00:16:03.691 "thread": "nvmf_tgt_poll_group_000", 00:16:03.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:03.691 "listen_address": { 00:16:03.691 "trtype": "TCP", 00:16:03.691 "adrfam": "IPv4", 00:16:03.691 "traddr": "10.0.0.2", 00:16:03.691 "trsvcid": "4420" 00:16:03.691 }, 00:16:03.691 "peer_address": { 00:16:03.691 "trtype": "TCP", 00:16:03.691 "adrfam": "IPv4", 00:16:03.691 "traddr": "10.0.0.1", 00:16:03.691 "trsvcid": "48564" 00:16:03.691 }, 00:16:03.691 "auth": { 00:16:03.691 "state": "completed", 00:16:03.691 "digest": "sha256", 00:16:03.691 "dhgroup": "null" 00:16:03.691 } 00:16:03.691 } 00:16:03.691 ]' 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.691 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.975 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:03.975 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.618 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.917 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.177 00:16:05.177 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.178 { 00:16:05.178 "cntlid": 9, 00:16:05.178 "qid": 0, 00:16:05.178 "state": "enabled", 00:16:05.178 "thread": "nvmf_tgt_poll_group_000", 00:16:05.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:05.178 "listen_address": { 00:16:05.178 "trtype": "TCP", 00:16:05.178 "adrfam": "IPv4", 00:16:05.178 "traddr": "10.0.0.2", 00:16:05.178 "trsvcid": "4420" 00:16:05.178 }, 00:16:05.178 "peer_address": { 00:16:05.178 "trtype": "TCP", 00:16:05.178 "adrfam": "IPv4", 00:16:05.178 "traddr": "10.0.0.1", 00:16:05.178 "trsvcid": "48602" 00:16:05.178 }, 00:16:05.178 "auth": { 00:16:05.178 "state": "completed", 00:16:05.178 "digest": "sha256", 00:16:05.178 "dhgroup": "ffdhe2048" 00:16:05.178 } 00:16:05.178 } 00:16:05.178 ]' 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.178 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.438 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:05.438 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.438 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.438 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.438 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.699 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:05.699 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:06.270 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.530 18:32:00 
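# Note: before every iteration the initiator-side bdev_nvme module is restricted to
# exactly one digest and one DH group, so a passing attach proves that specific
# combination was actually negotiated rather than some fallback. Stand-alone form,
# addressed to the host app's RPC socket as in the trace:
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048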
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.530 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.791 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.791 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.052 { 00:16:07.052 "cntlid": 11, 00:16:07.052 "qid": 0, 00:16:07.052 "state": "enabled", 00:16:07.052 "thread": "nvmf_tgt_poll_group_000", 00:16:07.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:07.052 "listen_address": { 00:16:07.052 "trtype": "TCP", 00:16:07.052 "adrfam": "IPv4", 00:16:07.052 "traddr": "10.0.0.2", 00:16:07.052 "trsvcid": "4420" 00:16:07.052 }, 00:16:07.052 "peer_address": { 00:16:07.052 "trtype": "TCP", 00:16:07.052 "adrfam": "IPv4", 00:16:07.052 "traddr": "10.0.0.1", 00:16:07.052 "trsvcid": "48630" 00:16:07.052 }, 00:16:07.052 "auth": { 00:16:07.052 "state": "completed", 00:16:07.052 "digest": "sha256", 00:16:07.052 "dhgroup": "ffdhe2048" 00:16:07.052 } 00:16:07.052 } 00:16:07.052 ]' 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.052 18:32:00 
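# Note: bidirectional authentication takes two mirrored registrations: the target
# learns the host's key pair via nvmf_subsystem_add_host, and the host presents the
# same pair when attaching the controller. "key1"/"ckey1" are key names presumably
# registered earlier in the script, outside this excerpt:
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1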
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.052 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.313 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:07.313 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.885 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:08.146 18:32:02 
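# Note: the recurring ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion is
# what toggles between unidirectional and bidirectional runs: bash's ${var:+word}
# yields "word" only when var is set and non-empty, so an empty ckeys entry simply
# drops the --dhchap-ctrlr-key flag (key3 is tested without a controller key in this
# log). Small illustration with hypothetical values and $i in place of the script's
# positional $3:
ckeys=("c0" "c1" "c2" "")               # ckeys[3] deliberately empty
for i in 0 1 2 3; do
    ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "keyid=$i extra flags: ${ckey[*]:-<none>}"
done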
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.146 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.407 00:16:08.407 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.407 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.407 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.667 { 00:16:08.667 "cntlid": 13, 00:16:08.667 "qid": 0, 00:16:08.667 "state": "enabled", 00:16:08.667 "thread": "nvmf_tgt_poll_group_000", 00:16:08.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:08.667 "listen_address": { 00:16:08.667 "trtype": "TCP", 00:16:08.667 "adrfam": "IPv4", 00:16:08.667 "traddr": "10.0.0.2", 00:16:08.667 "trsvcid": "4420" 00:16:08.667 }, 00:16:08.667 "peer_address": { 00:16:08.667 "trtype": "TCP", 00:16:08.667 "adrfam": "IPv4", 00:16:08.667 "traddr": "10.0.0.1", 00:16:08.667 "trsvcid": "48674" 00:16:08.667 }, 00:16:08.667 "auth": { 00:16:08.667 "state": "completed", 00:16:08.667 "digest": 
"sha256", 00:16:08.667 "dhgroup": "ffdhe2048" 00:16:08.667 } 00:16:08.667 } 00:16:08.667 ]' 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.667 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.928 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:08.928 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.500 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.761 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.761 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.022 00:16:10.022 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.022 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.022 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.283 { 00:16:10.283 "cntlid": 15, 00:16:10.283 "qid": 0, 00:16:10.283 "state": "enabled", 00:16:10.283 "thread": "nvmf_tgt_poll_group_000", 00:16:10.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:10.283 "listen_address": { 00:16:10.283 "trtype": "TCP", 00:16:10.283 "adrfam": "IPv4", 00:16:10.283 "traddr": "10.0.0.2", 00:16:10.283 "trsvcid": "4420" 00:16:10.283 }, 00:16:10.283 "peer_address": { 00:16:10.283 "trtype": "TCP", 00:16:10.283 "adrfam": "IPv4", 00:16:10.283 "traddr": "10.0.0.1", 00:16:10.283 
"trsvcid": "48700" 00:16:10.283 }, 00:16:10.283 "auth": { 00:16:10.283 "state": "completed", 00:16:10.283 "digest": "sha256", 00:16:10.283 "dhgroup": "ffdhe2048" 00:16:10.283 } 00:16:10.283 } 00:16:10.283 ]' 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.283 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.544 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:10.544 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:11.115 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.115 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.115 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.115 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.116 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.116 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.116 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.116 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.116 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:11.377 18:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.377 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.637 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.637 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.898 { 00:16:11.898 "cntlid": 17, 00:16:11.898 "qid": 0, 00:16:11.898 "state": "enabled", 00:16:11.898 "thread": "nvmf_tgt_poll_group_000", 00:16:11.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:11.898 "listen_address": { 00:16:11.898 "trtype": "TCP", 00:16:11.898 "adrfam": "IPv4", 
00:16:11.898 "traddr": "10.0.0.2", 00:16:11.898 "trsvcid": "4420" 00:16:11.898 }, 00:16:11.898 "peer_address": { 00:16:11.898 "trtype": "TCP", 00:16:11.898 "adrfam": "IPv4", 00:16:11.898 "traddr": "10.0.0.1", 00:16:11.898 "trsvcid": "38966" 00:16:11.898 }, 00:16:11.898 "auth": { 00:16:11.898 "state": "completed", 00:16:11.898 "digest": "sha256", 00:16:11.898 "dhgroup": "ffdhe3072" 00:16:11.898 } 00:16:11.898 } 00:16:11.898 ]' 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.898 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.158 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:12.158 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:12.730 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.991 18:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.250 00:16:13.250 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.251 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.251 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.251 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.251 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.251 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.251 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.510 { 
00:16:13.510 "cntlid": 19, 00:16:13.510 "qid": 0, 00:16:13.510 "state": "enabled", 00:16:13.510 "thread": "nvmf_tgt_poll_group_000", 00:16:13.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:13.510 "listen_address": { 00:16:13.510 "trtype": "TCP", 00:16:13.510 "adrfam": "IPv4", 00:16:13.510 "traddr": "10.0.0.2", 00:16:13.510 "trsvcid": "4420" 00:16:13.510 }, 00:16:13.510 "peer_address": { 00:16:13.510 "trtype": "TCP", 00:16:13.510 "adrfam": "IPv4", 00:16:13.510 "traddr": "10.0.0.1", 00:16:13.510 "trsvcid": "38984" 00:16:13.510 }, 00:16:13.510 "auth": { 00:16:13.510 "state": "completed", 00:16:13.510 "digest": "sha256", 00:16:13.510 "dhgroup": "ffdhe3072" 00:16:13.510 } 00:16:13.510 } 00:16:13.510 ]' 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.510 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.769 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:13.769 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.340 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.600 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.861 00:16:14.861 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.861 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.861 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.121 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.121 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.121 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.121 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.121 18:32:08 
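# Note: two SPDK RPC endpoints are in play throughout: rpc_cmd drives the target,
# while every hostrpc line expands (per target/auth.sh@31 above) to rpc.py against
# /var/tmp/host.sock, i.e. a second SPDK app acting as the NVMe-oF initiator. The
# wrapper is presumably little more than:
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}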
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.121 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.121 { 00:16:15.121 "cntlid": 21, 00:16:15.121 "qid": 0, 00:16:15.121 "state": "enabled", 00:16:15.121 "thread": "nvmf_tgt_poll_group_000", 00:16:15.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:15.121 "listen_address": { 00:16:15.121 "trtype": "TCP", 00:16:15.121 "adrfam": "IPv4", 00:16:15.121 "traddr": "10.0.0.2", 00:16:15.121 "trsvcid": "4420" 00:16:15.121 }, 00:16:15.121 "peer_address": { 00:16:15.121 "trtype": "TCP", 00:16:15.121 "adrfam": "IPv4", 00:16:15.121 "traddr": "10.0.0.1", 00:16:15.121 "trsvcid": "39002" 00:16:15.121 }, 00:16:15.121 "auth": { 00:16:15.121 "state": "completed", 00:16:15.121 "digest": "sha256", 00:16:15.121 "dhgroup": "ffdhe3072" 00:16:15.121 } 00:16:15.121 } 00:16:15.121 ]' 00:16:15.121 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.121 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.382 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:15.382 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:15.952 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.952 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:15.952 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.952 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.952 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:15.953 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.953 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.953 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.213 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.474 00:16:16.474 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.474 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.474 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.735 18:32:10 
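Note the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible at auth.sh@68: the controller key is injected only when one exists for that key id. For key3 the nvmf_subsystem_add_host call in the trace carries no --dhchap-ctrlr-key at all, so that iteration exercises unidirectional (host-only) authentication. A minimal illustration of the same bash idiom, with $subnqn/$hostnqn standing in for the NQNs written out in the log:

    # ${var:+word} expands to word only when var is set and non-empty, so an
    # empty ckeys[3] makes the array empty and the option disappears entirely
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
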
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.735 { 00:16:16.735 "cntlid": 23, 00:16:16.735 "qid": 0, 00:16:16.735 "state": "enabled", 00:16:16.735 "thread": "nvmf_tgt_poll_group_000", 00:16:16.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:16.735 "listen_address": { 00:16:16.735 "trtype": "TCP", 00:16:16.735 "adrfam": "IPv4", 00:16:16.735 "traddr": "10.0.0.2", 00:16:16.735 "trsvcid": "4420" 00:16:16.735 }, 00:16:16.735 "peer_address": { 00:16:16.735 "trtype": "TCP", 00:16:16.735 "adrfam": "IPv4", 00:16:16.735 "traddr": "10.0.0.1", 00:16:16.735 "trsvcid": "39028" 00:16:16.735 }, 00:16:16.735 "auth": { 00:16:16.735 "state": "completed", 00:16:16.735 "digest": "sha256", 00:16:16.735 "dhgroup": "ffdhe3072" 00:16:16.735 } 00:16:16.735 } 00:16:16.735 ]' 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.735 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.995 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:16.995 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.565 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.826 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.087 00:16:18.087 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.087 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.087 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.348 { 00:16:18.348 "cntlid": 25, 00:16:18.348 "qid": 0, 00:16:18.348 "state": "enabled", 00:16:18.348 "thread": "nvmf_tgt_poll_group_000", 00:16:18.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:18.348 "listen_address": { 00:16:18.348 "trtype": "TCP", 00:16:18.348 "adrfam": "IPv4", 00:16:18.348 "traddr": "10.0.0.2", 00:16:18.348 "trsvcid": "4420" 00:16:18.348 }, 00:16:18.348 "peer_address": { 00:16:18.348 "trtype": "TCP", 00:16:18.348 "adrfam": "IPv4", 00:16:18.348 "traddr": "10.0.0.1", 00:16:18.348 "trsvcid": "39056" 00:16:18.348 }, 00:16:18.348 "auth": { 00:16:18.348 "state": "completed", 00:16:18.348 "digest": "sha256", 00:16:18.348 "dhgroup": "ffdhe4096" 00:16:18.348 } 00:16:18.348 } 00:16:18.348 ]' 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.348 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.609 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:18.609 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.179 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.440 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.701 00:16:19.701 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.701 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.701 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.963 { 00:16:19.963 "cntlid": 27, 00:16:19.963 "qid": 0, 00:16:19.963 "state": "enabled", 00:16:19.963 "thread": "nvmf_tgt_poll_group_000", 00:16:19.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:19.963 "listen_address": { 00:16:19.963 "trtype": "TCP", 00:16:19.963 "adrfam": "IPv4", 00:16:19.963 "traddr": "10.0.0.2", 00:16:19.963 "trsvcid": "4420" 00:16:19.963 }, 00:16:19.963 "peer_address": { 00:16:19.963 "trtype": "TCP", 00:16:19.963 "adrfam": "IPv4", 00:16:19.963 "traddr": "10.0.0.1", 00:16:19.963 "trsvcid": "39082" 00:16:19.963 }, 00:16:19.963 "auth": { 00:16:19.963 "state": "completed", 00:16:19.963 "digest": "sha256", 00:16:19.963 "dhgroup": "ffdhe4096" 00:16:19.963 } 00:16:19.963 } 00:16:19.963 ]' 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.963 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.224 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:20.224 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:20.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.794 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.054 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.315 00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
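The DHHC-1 secrets exchanged in these commands follow the representation defined for NVMe in-band authentication (TP 8006): DHHC-1:<hmac>:<base64 of key plus CRC-32>:, where the middle field records the transformation applied when the key was generated. In this log the values 00, 01, 02 and 03 all appear, which correspond to no transform, SHA-256, SHA-384 and SHA-512 respectively. Secrets of this shape can be produced with nvme-cli, assuming a version new enough to ship gen-dhchap-key:

    # generate a 32-byte DH-HMAC-CHAP secret transformed with SHA-256; the
    # output is a DHHC-1:01:...: string like the ones in this trace
    nvme gen-dhchap-key --hmac=1 --key-length=32
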
00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.315 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.575 { 00:16:21.575 "cntlid": 29, 00:16:21.575 "qid": 0, 00:16:21.575 "state": "enabled", 00:16:21.575 "thread": "nvmf_tgt_poll_group_000", 00:16:21.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:21.575 "listen_address": { 00:16:21.575 "trtype": "TCP", 00:16:21.575 "adrfam": "IPv4", 00:16:21.575 "traddr": "10.0.0.2", 00:16:21.575 "trsvcid": "4420" 00:16:21.575 }, 00:16:21.575 "peer_address": { 00:16:21.575 "trtype": "TCP", 00:16:21.575 "adrfam": "IPv4", 00:16:21.575 "traddr": "10.0.0.1", 00:16:21.575 "trsvcid": "58416" 00:16:21.575 }, 00:16:21.575 "auth": { 00:16:21.575 "state": "completed", 00:16:21.575 "digest": "sha256", 00:16:21.575 "dhgroup": "ffdhe4096" 00:16:21.575 } 00:16:21.575 } 00:16:21.575 ]' 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.575 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.835 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:21.835 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: 
--dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.406 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.667 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.927 00:16:22.928 18:32:16 
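Each cycle's pass/fail decision comes from the three jq probes against the qpair dump (auth.sh@75-77); the backslash-escaped right-hand sides in the trace are just how bash xtrace prints the pattern side of a [[ == ]] comparison. Reconstructed, the checks are simply the following, with $digest and $dhgroup supplied by the enclosing loop:

    # assert the first qpair negotiated the expected digest/dhgroup and that
    # DH-HMAC-CHAP authentication actually completed
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
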
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.928 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.928 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.188 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.188 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.188 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.188 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.188 { 00:16:23.188 "cntlid": 31, 00:16:23.188 "qid": 0, 00:16:23.188 "state": "enabled", 00:16:23.188 "thread": "nvmf_tgt_poll_group_000", 00:16:23.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:23.188 "listen_address": { 00:16:23.188 "trtype": "TCP", 00:16:23.188 "adrfam": "IPv4", 00:16:23.188 "traddr": "10.0.0.2", 00:16:23.188 "trsvcid": "4420" 00:16:23.188 }, 00:16:23.188 "peer_address": { 00:16:23.188 "trtype": "TCP", 00:16:23.188 "adrfam": "IPv4", 00:16:23.188 "traddr": "10.0.0.1", 00:16:23.188 "trsvcid": "58452" 00:16:23.188 }, 00:16:23.188 "auth": { 00:16:23.188 "state": "completed", 00:16:23.188 "digest": "sha256", 00:16:23.188 "dhgroup": "ffdhe4096" 00:16:23.188 } 00:16:23.188 } 00:16:23.188 ]' 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.188 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.448 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:23.448 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.019 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.280 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.541 00:16:24.541 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.541 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.541 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.802 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.802 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.802 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.802 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.802 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.802 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.802 { 00:16:24.802 "cntlid": 33, 00:16:24.802 "qid": 0, 00:16:24.802 "state": "enabled", 00:16:24.802 "thread": "nvmf_tgt_poll_group_000", 00:16:24.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:24.802 "listen_address": { 00:16:24.802 "trtype": "TCP", 00:16:24.802 "adrfam": "IPv4", 00:16:24.802 "traddr": "10.0.0.2", 00:16:24.802 "trsvcid": "4420" 00:16:24.802 }, 00:16:24.802 "peer_address": { 00:16:24.802 "trtype": "TCP", 00:16:24.802 "adrfam": "IPv4", 00:16:24.803 "traddr": "10.0.0.1", 00:16:24.803 "trsvcid": "58458" 00:16:24.803 }, 00:16:24.803 "auth": { 00:16:24.803 "state": "completed", 00:16:24.803 "digest": "sha256", 00:16:24.803 "dhgroup": "ffdhe6144" 00:16:24.803 } 00:16:24.803 } 00:16:24.803 ]' 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.803 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.063 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret 
DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:25.063 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:25.635 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.895 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.155 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.416 { 00:16:26.416 "cntlid": 35, 00:16:26.416 "qid": 0, 00:16:26.416 "state": "enabled", 00:16:26.416 "thread": "nvmf_tgt_poll_group_000", 00:16:26.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:26.416 "listen_address": { 00:16:26.416 "trtype": "TCP", 00:16:26.416 "adrfam": "IPv4", 00:16:26.416 "traddr": "10.0.0.2", 00:16:26.416 "trsvcid": "4420" 00:16:26.416 }, 00:16:26.416 "peer_address": { 00:16:26.416 "trtype": "TCP", 00:16:26.416 "adrfam": "IPv4", 00:16:26.416 "traddr": "10.0.0.1", 00:16:26.416 "trsvcid": "58494" 00:16:26.416 }, 00:16:26.416 "auth": { 00:16:26.416 "state": "completed", 00:16:26.416 "digest": "sha256", 00:16:26.416 "dhgroup": "ffdhe6144" 00:16:26.416 } 00:16:26.416 } 00:16:26.416 ]' 00:16:26.416 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.677 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.938 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:26.938 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.509 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.769 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:27.769 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.769 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.769 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:27.769 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.769 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.770 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.030 00:16:28.030 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.030 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.030 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.291 { 00:16:28.291 "cntlid": 37, 00:16:28.291 "qid": 0, 00:16:28.291 "state": "enabled", 00:16:28.291 "thread": "nvmf_tgt_poll_group_000", 00:16:28.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:28.291 "listen_address": { 00:16:28.291 "trtype": "TCP", 00:16:28.291 "adrfam": "IPv4", 00:16:28.291 "traddr": "10.0.0.2", 00:16:28.291 "trsvcid": "4420" 00:16:28.291 }, 00:16:28.291 "peer_address": { 00:16:28.291 "trtype": "TCP", 00:16:28.291 "adrfam": "IPv4", 00:16:28.291 "traddr": "10.0.0.1", 00:16:28.291 "trsvcid": "58508" 00:16:28.291 }, 00:16:28.291 "auth": { 00:16:28.291 "state": "completed", 00:16:28.291 "digest": "sha256", 00:16:28.291 "dhgroup": "ffdhe6144" 00:16:28.291 } 00:16:28.291 } 00:16:28.291 ]' 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.291 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.292 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:28.292 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.552 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:28.552 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:29.121 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.121 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:29.121 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.121 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.382 18:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.382 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.955 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.955 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.955 { 00:16:29.955 "cntlid": 39, 00:16:29.955 "qid": 0, 00:16:29.955 "state": "enabled", 00:16:29.955 "thread": "nvmf_tgt_poll_group_000", 00:16:29.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:29.955 "listen_address": { 00:16:29.955 "trtype": "TCP", 00:16:29.955 "adrfam": "IPv4", 00:16:29.956 "traddr": "10.0.0.2", 00:16:29.956 "trsvcid": "4420" 00:16:29.956 }, 00:16:29.956 "peer_address": { 00:16:29.956 "trtype": "TCP", 00:16:29.956 "adrfam": "IPv4", 00:16:29.956 "traddr": "10.0.0.1", 00:16:29.956 "trsvcid": "58538" 00:16:29.956 }, 00:16:29.956 "auth": { 00:16:29.956 "state": "completed", 00:16:29.956 "digest": "sha256", 00:16:29.956 "dhgroup": "ffdhe6144" 00:16:29.956 } 00:16:29.956 } 00:16:29.956 ]' 00:16:29.956 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.956 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.956 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.956 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.956 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.217 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:30.217 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.217 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.217 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:30.217 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:30.788 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.050 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
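Every round in this trace is one pass of target/auth.sh's connect_authenticate helper with a different digest/dhgroup/key triple; the round starting above is sha256/ffdhe8192/key0. A condensed sketch of the loop body, reconstructed only from the commands visible in the trace (rpc_cmd drives the target's RPC socket and hostrpc expands to rpc.py -s /var/tmp/host.sock, as the paired lines above show; $hostnqn, $hostid, $key and $ckey stand in for the uuid-based host NQN and the inline DHHC-1 secrets and are placeholders, not names from the script):

  # host side: pin the DH-HMAC-CHAP digest and dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: authorize the host NQN with the key pair under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a bdev controller with the same keys
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the controller came up and the qpair finished authentication
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  # tear down, re-drive the same secrets through nvme-cli, then deauthorize
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The trace resumes below with the attach step of this round.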
00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.050 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.621 00:16:31.621 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.621 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.621 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.882 { 00:16:31.882 "cntlid": 41, 00:16:31.882 "qid": 0, 00:16:31.882 "state": "enabled", 00:16:31.882 "thread": "nvmf_tgt_poll_group_000", 00:16:31.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:31.882 "listen_address": { 00:16:31.882 "trtype": "TCP", 00:16:31.882 "adrfam": "IPv4", 00:16:31.882 "traddr": "10.0.0.2", 00:16:31.882 "trsvcid": "4420" 00:16:31.882 }, 00:16:31.882 "peer_address": { 00:16:31.882 "trtype": "TCP", 00:16:31.882 "adrfam": "IPv4", 00:16:31.882 "traddr": "10.0.0.1", 00:16:31.882 "trsvcid": "59656" 00:16:31.882 }, 00:16:31.882 "auth": { 00:16:31.882 "state": "completed", 00:16:31.882 "digest": "sha256", 00:16:31.882 "dhgroup": "ffdhe8192" 00:16:31.882 } 00:16:31.882 } 00:16:31.882 ]' 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.882 18:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.882 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.143 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:32.143 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:32.715 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.975 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.546 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.546 { 00:16:33.546 "cntlid": 43, 00:16:33.546 "qid": 0, 00:16:33.546 "state": "enabled", 00:16:33.546 "thread": "nvmf_tgt_poll_group_000", 00:16:33.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:33.546 "listen_address": { 00:16:33.546 "trtype": "TCP", 00:16:33.546 "adrfam": "IPv4", 00:16:33.546 "traddr": "10.0.0.2", 00:16:33.546 "trsvcid": "4420" 00:16:33.546 }, 00:16:33.546 "peer_address": { 00:16:33.546 "trtype": "TCP", 00:16:33.546 "adrfam": "IPv4", 00:16:33.546 "traddr": "10.0.0.1", 00:16:33.546 "trsvcid": "59682" 00:16:33.546 }, 00:16:33.546 "auth": { 00:16:33.546 "state": "completed", 00:16:33.546 "digest": "sha256", 00:16:33.546 "dhgroup": "ffdhe8192" 00:16:33.546 } 00:16:33.546 } 00:16:33.546 ]' 00:16:33.546 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:33.807 18:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.760 18:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.760 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.331 00:16:35.331 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.331 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.331 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.331 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.592 { 00:16:35.592 "cntlid": 45, 00:16:35.592 "qid": 0, 00:16:35.592 "state": "enabled", 00:16:35.592 "thread": "nvmf_tgt_poll_group_000", 00:16:35.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:35.592 "listen_address": { 00:16:35.592 "trtype": "TCP", 00:16:35.592 "adrfam": "IPv4", 00:16:35.592 "traddr": "10.0.0.2", 00:16:35.592 "trsvcid": "4420" 00:16:35.592 }, 00:16:35.592 "peer_address": { 00:16:35.592 "trtype": "TCP", 00:16:35.592 "adrfam": "IPv4", 00:16:35.592 "traddr": "10.0.0.1", 00:16:35.592 "trsvcid": "59722" 00:16:35.592 }, 00:16:35.592 "auth": { 00:16:35.592 "state": "completed", 00:16:35.592 "digest": "sha256", 00:16:35.592 "dhgroup": "ffdhe8192" 00:16:35.592 } 00:16:35.592 } 00:16:35.592 ]' 00:16:35.592 
18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.592 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.853 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:35.853 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.424 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.685 18:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.685 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.256 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.256 { 00:16:37.256 "cntlid": 47, 00:16:37.256 "qid": 0, 00:16:37.256 "state": "enabled", 00:16:37.256 "thread": "nvmf_tgt_poll_group_000", 00:16:37.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:37.256 "listen_address": { 00:16:37.256 "trtype": "TCP", 00:16:37.256 "adrfam": "IPv4", 00:16:37.256 "traddr": "10.0.0.2", 00:16:37.256 "trsvcid": "4420" 00:16:37.256 }, 00:16:37.256 "peer_address": { 00:16:37.256 "trtype": "TCP", 00:16:37.256 "adrfam": "IPv4", 00:16:37.256 "traddr": "10.0.0.1", 00:16:37.256 "trsvcid": "59762" 00:16:37.256 }, 00:16:37.256 "auth": { 00:16:37.256 "state": "completed", 00:16:37.256 
"digest": "sha256", 00:16:37.256 "dhgroup": "ffdhe8192" 00:16:37.256 } 00:16:37.256 } 00:16:37.256 ]' 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.256 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.516 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.516 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.516 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.516 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.516 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.777 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:37.777 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:38.348 18:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.348 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.609 00:16:38.609 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.609 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.609 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.869 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.869 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.870 { 00:16:38.870 "cntlid": 49, 00:16:38.870 "qid": 0, 00:16:38.870 "state": "enabled", 00:16:38.870 "thread": "nvmf_tgt_poll_group_000", 00:16:38.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:38.870 "listen_address": { 00:16:38.870 "trtype": "TCP", 00:16:38.870 "adrfam": "IPv4", 
00:16:38.870 "traddr": "10.0.0.2", 00:16:38.870 "trsvcid": "4420" 00:16:38.870 }, 00:16:38.870 "peer_address": { 00:16:38.870 "trtype": "TCP", 00:16:38.870 "adrfam": "IPv4", 00:16:38.870 "traddr": "10.0.0.1", 00:16:38.870 "trsvcid": "59794" 00:16:38.870 }, 00:16:38.870 "auth": { 00:16:38.870 "state": "completed", 00:16:38.870 "digest": "sha384", 00:16:38.870 "dhgroup": "null" 00:16:38.870 } 00:16:38.870 } 00:16:38.870 ]' 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.870 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.131 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.131 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.131 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.131 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:39.131 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.073 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.073 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.333 00:16:40.333 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.333 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.333 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.593 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.593 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.594 { 00:16:40.594 "cntlid": 51, 00:16:40.594 "qid": 0, 00:16:40.594 "state": "enabled", 
00:16:40.594 "thread": "nvmf_tgt_poll_group_000", 00:16:40.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:40.594 "listen_address": { 00:16:40.594 "trtype": "TCP", 00:16:40.594 "adrfam": "IPv4", 00:16:40.594 "traddr": "10.0.0.2", 00:16:40.594 "trsvcid": "4420" 00:16:40.594 }, 00:16:40.594 "peer_address": { 00:16:40.594 "trtype": "TCP", 00:16:40.594 "adrfam": "IPv4", 00:16:40.594 "traddr": "10.0.0.1", 00:16:40.594 "trsvcid": "59814" 00:16:40.594 }, 00:16:40.594 "auth": { 00:16:40.594 "state": "completed", 00:16:40.594 "digest": "sha384", 00:16:40.594 "dhgroup": "null" 00:16:40.594 } 00:16:40.594 } 00:16:40.594 ]' 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.594 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.854 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:40.854 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:41.424 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.684 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.685 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.685 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.685 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.945 00:16:41.945 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.945 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.945 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.205 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.205 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.205 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.205 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.205 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.205 18:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.205 { 00:16:42.205 "cntlid": 53, 00:16:42.205 "qid": 0, 00:16:42.205 "state": "enabled", 00:16:42.205 "thread": "nvmf_tgt_poll_group_000", 00:16:42.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:42.206 "listen_address": { 00:16:42.206 "trtype": "TCP", 00:16:42.206 "adrfam": "IPv4", 00:16:42.206 "traddr": "10.0.0.2", 00:16:42.206 "trsvcid": "4420" 00:16:42.206 }, 00:16:42.206 "peer_address": { 00:16:42.206 "trtype": "TCP", 00:16:42.206 "adrfam": "IPv4", 00:16:42.206 "traddr": "10.0.0.1", 00:16:42.206 "trsvcid": "56480" 00:16:42.206 }, 00:16:42.206 "auth": { 00:16:42.206 "state": "completed", 00:16:42.206 "digest": "sha384", 00:16:42.206 "dhgroup": "null" 00:16:42.206 } 00:16:42.206 } 00:16:42.206 ]' 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.206 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.466 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:42.466 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:43.037 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.037 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.297 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.557 00:16:43.557 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.557 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.557 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.816 { 00:16:43.816 "cntlid": 55, 00:16:43.816 "qid": 0, 00:16:43.816 "state": "enabled", 00:16:43.816 "thread": "nvmf_tgt_poll_group_000", 00:16:43.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:43.816 "listen_address": { 00:16:43.816 "trtype": "TCP", 00:16:43.816 "adrfam": "IPv4", 00:16:43.816 "traddr": "10.0.0.2", 00:16:43.816 "trsvcid": "4420" 00:16:43.816 }, 00:16:43.816 "peer_address": { 00:16:43.816 "trtype": "TCP", 00:16:43.816 "adrfam": "IPv4", 00:16:43.816 "traddr": "10.0.0.1", 00:16:43.816 "trsvcid": "56504" 00:16:43.816 }, 00:16:43.816 "auth": { 00:16:43.816 "state": "completed", 00:16:43.816 "digest": "sha384", 00:16:43.816 "dhgroup": "null" 00:16:43.816 } 00:16:43.816 } 00:16:43.816 ]' 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.816 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.078 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:44.078 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.647 18:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.647 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.648 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.908 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.168 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:45.168 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.169 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.169 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.169 { 00:16:45.169 "cntlid": 57, 00:16:45.169 "qid": 0, 00:16:45.169 "state": "enabled", 00:16:45.169 "thread": "nvmf_tgt_poll_group_000", 00:16:45.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:45.169 "listen_address": { 00:16:45.169 "trtype": "TCP", 00:16:45.169 "adrfam": "IPv4", 00:16:45.169 "traddr": "10.0.0.2", 00:16:45.169 "trsvcid": "4420" 00:16:45.169 }, 00:16:45.169 "peer_address": { 00:16:45.169 "trtype": "TCP", 00:16:45.169 "adrfam": "IPv4", 00:16:45.169 "traddr": "10.0.0.1", 00:16:45.169 "trsvcid": "56520" 00:16:45.169 }, 00:16:45.169 "auth": { 00:16:45.169 "state": "completed", 00:16:45.169 "digest": "sha384", 00:16:45.169 "dhgroup": "ffdhe2048" 00:16:45.169 } 00:16:45.169 } 00:16:45.169 ]' 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.429 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.690 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:45.690 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.261 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.520 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:46.520 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.520 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.520 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.520 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.521 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.521 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.780 { 00:16:46.780 "cntlid": 59, 00:16:46.780 "qid": 0, 00:16:46.780 "state": "enabled", 00:16:46.780 "thread": "nvmf_tgt_poll_group_000", 00:16:46.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:46.780 "listen_address": { 00:16:46.780 "trtype": "TCP", 00:16:46.780 "adrfam": "IPv4", 00:16:46.780 "traddr": "10.0.0.2", 00:16:46.780 "trsvcid": "4420" 00:16:46.780 }, 00:16:46.780 "peer_address": { 00:16:46.780 "trtype": "TCP", 00:16:46.780 "adrfam": "IPv4", 00:16:46.780 "traddr": "10.0.0.1", 00:16:46.780 "trsvcid": "56556" 00:16:46.780 }, 00:16:46.780 "auth": { 00:16:46.780 "state": "completed", 00:16:46.780 "digest": "sha384", 00:16:46.780 "dhgroup": "ffdhe2048" 00:16:46.780 } 00:16:46.780 } 00:16:46.780 ]' 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.780 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.040 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.040 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.040 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.040 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.040 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.040 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:47.040 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.982 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.242 00:16:48.242 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.242 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.242 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.502 { 00:16:48.502 "cntlid": 61, 00:16:48.502 "qid": 0, 00:16:48.502 "state": "enabled", 00:16:48.502 "thread": "nvmf_tgt_poll_group_000", 00:16:48.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:48.502 "listen_address": { 00:16:48.502 "trtype": "TCP", 00:16:48.502 "adrfam": "IPv4", 00:16:48.502 "traddr": "10.0.0.2", 00:16:48.502 "trsvcid": "4420" 00:16:48.502 }, 00:16:48.502 "peer_address": { 00:16:48.502 "trtype": "TCP", 00:16:48.502 "adrfam": "IPv4", 00:16:48.502 "traddr": "10.0.0.1", 00:16:48.502 "trsvcid": "56578" 00:16:48.502 }, 00:16:48.502 "auth": { 00:16:48.502 "state": "completed", 00:16:48.502 "digest": "sha384", 00:16:48.502 "dhgroup": "ffdhe2048" 00:16:48.502 } 00:16:48.502 } 00:16:48.502 ]' 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.502 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.762 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:48.762 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.334 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.595 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.856 00:16:49.856 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.856 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.856 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.116 { 00:16:50.116 "cntlid": 63, 00:16:50.116 "qid": 0, 00:16:50.116 "state": "enabled", 00:16:50.116 "thread": "nvmf_tgt_poll_group_000", 00:16:50.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:50.116 "listen_address": { 00:16:50.116 "trtype": "TCP", 00:16:50.116 "adrfam": "IPv4", 00:16:50.116 "traddr": "10.0.0.2", 00:16:50.116 "trsvcid": "4420" 00:16:50.116 }, 00:16:50.116 "peer_address": { 00:16:50.116 "trtype": "TCP", 00:16:50.116 "adrfam": "IPv4", 00:16:50.116 "traddr": "10.0.0.1", 00:16:50.116 "trsvcid": "56608" 00:16:50.116 }, 00:16:50.116 "auth": { 00:16:50.116 "state": "completed", 00:16:50.116 "digest": "sha384", 00:16:50.116 "dhgroup": "ffdhe2048" 00:16:50.116 } 00:16:50.116 } 00:16:50.116 ]' 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.116 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.116 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.116 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.116 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.116 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.116 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.116 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.377 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:50.377 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:50.947 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:50.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.947 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.948 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.208 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.468 
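
Every pass above follows the same shape, so it is worth spelling out once. For each (digest, dhgroup, keyid) combination, target/auth.sh pins the SPDK host application to a single DH-HMAC-CHAP digest and DH group, authorizes the host NQN on the subsystem with the keypair under test, and then attaches a controller so that authentication runs during connect. A condensed bash sketch of one pass, reconstructed from the xtrace (the helper bodies are not shown in this log, and the target-side call actually goes through the framework's rpc_cmd wrapper, so the names and sockets here are an approximation rather than the verbatim script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  digest=sha384 dhgroup=ffdhe3072 keyid=0 ckey=ckey0   # one point in the sweep

  # Host side: restrict the SPDK initiator (the second app behind
  # /var/tmp/host.sock) to the digest/dhgroup pair under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side: authorize the host NQN with the keypair. The controller
  # key is optional; key3 has no ckey3 in this run, which is what the
  # ${ckeys[$3]:+...} expansion in the trace handles.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "$ckey"}

  # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "$ckey"}

Running the initiator as a second SPDK application behind /var/tmp/host.sock is what lets the same rpc.py drive both ends of the handshake from a single test.
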
00:16:51.468 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.468 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.468 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.728 { 00:16:51.728 "cntlid": 65, 00:16:51.728 "qid": 0, 00:16:51.728 "state": "enabled", 00:16:51.728 "thread": "nvmf_tgt_poll_group_000", 00:16:51.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:51.728 "listen_address": { 00:16:51.728 "trtype": "TCP", 00:16:51.728 "adrfam": "IPv4", 00:16:51.728 "traddr": "10.0.0.2", 00:16:51.728 "trsvcid": "4420" 00:16:51.728 }, 00:16:51.728 "peer_address": { 00:16:51.728 "trtype": "TCP", 00:16:51.728 "adrfam": "IPv4", 00:16:51.728 "traddr": "10.0.0.1", 00:16:51.728 "trsvcid": "48142" 00:16:51.728 }, 00:16:51.728 "auth": { 00:16:51.728 "state": "completed", 00:16:51.728 "digest": "sha384", 00:16:51.728 "dhgroup": "ffdhe3072" 00:16:51.728 } 00:16:51.728 } 00:16:51.728 ]' 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.728 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.988 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:51.988 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.558 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.819 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.078 00:16:53.078 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.078 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.078 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.339 { 00:16:53.339 "cntlid": 67, 00:16:53.339 "qid": 0, 00:16:53.339 "state": "enabled", 00:16:53.339 "thread": "nvmf_tgt_poll_group_000", 00:16:53.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:53.339 "listen_address": { 00:16:53.339 "trtype": "TCP", 00:16:53.339 "adrfam": "IPv4", 00:16:53.339 "traddr": "10.0.0.2", 00:16:53.339 "trsvcid": "4420" 00:16:53.339 }, 00:16:53.339 "peer_address": { 00:16:53.339 "trtype": "TCP", 00:16:53.339 "adrfam": "IPv4", 00:16:53.339 "traddr": "10.0.0.1", 00:16:53.339 "trsvcid": "48182" 00:16:53.339 }, 00:16:53.339 "auth": { 00:16:53.339 "state": "completed", 00:16:53.339 "digest": "sha384", 00:16:53.339 "dhgroup": "ffdhe3072" 00:16:53.339 } 00:16:53.339 } 00:16:53.339 ]' 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.339 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.599 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret 
DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:53.600 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.171 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.431 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.691 00:16:54.691 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.691 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.691 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.952 { 00:16:54.952 "cntlid": 69, 00:16:54.952 "qid": 0, 00:16:54.952 "state": "enabled", 00:16:54.952 "thread": "nvmf_tgt_poll_group_000", 00:16:54.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:54.952 "listen_address": { 00:16:54.952 "trtype": "TCP", 00:16:54.952 "adrfam": "IPv4", 00:16:54.952 "traddr": "10.0.0.2", 00:16:54.952 "trsvcid": "4420" 00:16:54.952 }, 00:16:54.952 "peer_address": { 00:16:54.952 "trtype": "TCP", 00:16:54.952 "adrfam": "IPv4", 00:16:54.952 "traddr": "10.0.0.1", 00:16:54.952 "trsvcid": "48212" 00:16:54.952 }, 00:16:54.952 "auth": { 00:16:54.952 "state": "completed", 00:16:54.952 "digest": "sha384", 00:16:54.952 "dhgroup": "ffdhe3072" 00:16:54.952 } 00:16:54.952 } 00:16:54.952 ]' 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.952 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:55.212 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:55.212 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.782 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
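
After each attach the script does not trust the successful connect alone; it verifies both ends before tearing down. Incidentally, the backslash runs in comparisons such as [[ sha384 == \s\h\a\3\8\4 ]] are not corruption: bash xtrace escapes every character of a quoted right-hand side of a [[ == ]] test, which is how a literal (non-glob) match is rendered. A sketch of the verification step, reusing the variables from the sketch above:

  # Host side: exactly one controller, named nvme0, must exist.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target side: the qpair's auth descriptor must report the negotiated
  # parameters and the state "completed" (rpc_cmd wraps this in the script).
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Drop the host-side controller before the nvme-cli pass.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
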
00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.042 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.303 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.303 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.563 { 00:16:56.563 "cntlid": 71, 00:16:56.563 "qid": 0, 00:16:56.563 "state": "enabled", 00:16:56.563 "thread": "nvmf_tgt_poll_group_000", 00:16:56.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:56.563 "listen_address": { 00:16:56.563 "trtype": "TCP", 00:16:56.563 "adrfam": "IPv4", 00:16:56.563 "traddr": "10.0.0.2", 00:16:56.563 "trsvcid": "4420" 00:16:56.563 }, 00:16:56.563 "peer_address": { 00:16:56.563 "trtype": "TCP", 00:16:56.563 "adrfam": "IPv4", 00:16:56.563 "traddr": "10.0.0.1", 00:16:56.563 "trsvcid": "48252" 00:16:56.563 }, 00:16:56.563 "auth": { 00:16:56.563 "state": "completed", 00:16:56.563 "digest": "sha384", 00:16:56.563 "dhgroup": "ffdhe3072" 00:16:56.563 } 00:16:56.563 } 00:16:56.563 ]' 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.563 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.824 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:56.824 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.393 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
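Between RPC passes, the same credentials are exercised through the kernel initiator, where nvme-cli takes the secret material inline rather than a keyring name. In the DHHC-1:<nn>: representation the <nn> field names the HMAC used to transform the secret (00 = none, 01/02/03 = SHA-256/-384/-512) and the base64 payload carries the key material plus a CRC; that reading of the fields comes from the NVMe in-band authentication secret format, not from this log. A sketch using the key-3 secret from this run (key 3 again has no controller secret, matching the absent ckey3 on the RPC side):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
      --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=:

  # Tear down before the next digest/DH-group pass.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The remainder of this stretch repeats the identical pattern for keys 0-3 under ffdhe4096, ffdhe6144 and finally ffdhe8192; essentially only the DH group, the ephemeral peer ports and the cntlid values change from here on.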
00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.654 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.915 00:16:57.915 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.915 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.915 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.176 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.176 { 00:16:58.176 "cntlid": 73, 00:16:58.176 "qid": 0, 00:16:58.176 "state": "enabled", 00:16:58.176 "thread": "nvmf_tgt_poll_group_000", 00:16:58.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:58.176 "listen_address": { 00:16:58.176 "trtype": "TCP", 00:16:58.176 "adrfam": "IPv4", 00:16:58.176 "traddr": "10.0.0.2", 00:16:58.176 "trsvcid": "4420" 00:16:58.176 }, 00:16:58.176 "peer_address": { 00:16:58.176 "trtype": "TCP", 00:16:58.176 "adrfam": "IPv4", 00:16:58.176 "traddr": "10.0.0.1", 00:16:58.176 "trsvcid": "48286" 00:16:58.176 }, 00:16:58.176 "auth": { 00:16:58.176 "state": "completed", 00:16:58.176 "digest": "sha384", 00:16:58.176 "dhgroup": "ffdhe4096" 00:16:58.176 } 00:16:58.176 } 00:16:58.176 ]' 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.176 
18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.176 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.436 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:58.436 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:16:59.006 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.006 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.006 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.006 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.006 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.006 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.006 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:59.006 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.267 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.528 00:16:59.528 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.528 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.528 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.842 { 00:16:59.842 "cntlid": 75, 00:16:59.842 "qid": 0, 00:16:59.842 "state": "enabled", 00:16:59.842 "thread": "nvmf_tgt_poll_group_000", 00:16:59.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:59.842 "listen_address": { 00:16:59.842 "trtype": "TCP", 00:16:59.842 "adrfam": "IPv4", 00:16:59.842 "traddr": "10.0.0.2", 00:16:59.842 "trsvcid": "4420" 00:16:59.842 }, 00:16:59.842 "peer_address": { 00:16:59.842 "trtype": "TCP", 00:16:59.842 "adrfam": "IPv4", 00:16:59.842 "traddr": "10.0.0.1", 00:16:59.842 "trsvcid": "48308" 00:16:59.842 }, 00:16:59.842 "auth": { 00:16:59.842 "state": "completed", 00:16:59.842 "digest": "sha384", 00:16:59.842 "dhgroup": "ffdhe4096" 00:16:59.842 } 00:16:59.842 } 00:16:59.842 ]' 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.842 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.133 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:00.133 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.752 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.032 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.032 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.293 { 00:17:01.293 "cntlid": 77, 00:17:01.293 "qid": 0, 00:17:01.293 "state": "enabled", 00:17:01.293 "thread": "nvmf_tgt_poll_group_000", 00:17:01.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:01.293 "listen_address": { 00:17:01.293 "trtype": "TCP", 00:17:01.293 "adrfam": "IPv4", 00:17:01.293 "traddr": "10.0.0.2", 00:17:01.293 "trsvcid": "4420" 00:17:01.293 }, 00:17:01.293 "peer_address": { 00:17:01.293 "trtype": "TCP", 00:17:01.293 "adrfam": "IPv4", 00:17:01.293 "traddr": "10.0.0.1", 00:17:01.293 "trsvcid": "35762" 00:17:01.293 }, 00:17:01.293 "auth": { 00:17:01.293 "state": "completed", 00:17:01.293 "digest": "sha384", 00:17:01.293 "dhgroup": "ffdhe4096" 00:17:01.293 } 00:17:01.293 } 00:17:01.293 ]' 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.293 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.293 18:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:01.553 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.493 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.754 00:17:02.754 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.754 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.754 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.015 { 00:17:03.015 "cntlid": 79, 00:17:03.015 "qid": 0, 00:17:03.015 "state": "enabled", 00:17:03.015 "thread": "nvmf_tgt_poll_group_000", 00:17:03.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:03.015 "listen_address": { 00:17:03.015 "trtype": "TCP", 00:17:03.015 "adrfam": "IPv4", 00:17:03.015 "traddr": "10.0.0.2", 00:17:03.015 "trsvcid": "4420" 00:17:03.015 }, 00:17:03.015 "peer_address": { 00:17:03.015 "trtype": "TCP", 00:17:03.015 "adrfam": "IPv4", 00:17:03.015 "traddr": "10.0.0.1", 00:17:03.015 "trsvcid": "35798" 00:17:03.015 }, 00:17:03.015 "auth": { 00:17:03.015 "state": "completed", 00:17:03.015 "digest": "sha384", 00:17:03.015 "dhgroup": "ffdhe4096" 00:17:03.015 } 00:17:03.015 } 00:17:03.015 ]' 00:17:03.015 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.015 18:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.015 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.015 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.015 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.275 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.275 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.275 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.275 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:03.275 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.215 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.215 18:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.215 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.216 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.476 00:17:04.476 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.476 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.476 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.738 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.738 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.738 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.739 { 00:17:04.739 "cntlid": 81, 00:17:04.739 "qid": 0, 00:17:04.739 "state": "enabled", 00:17:04.739 "thread": "nvmf_tgt_poll_group_000", 00:17:04.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:04.739 "listen_address": { 00:17:04.739 "trtype": "TCP", 00:17:04.739 "adrfam": "IPv4", 00:17:04.739 "traddr": "10.0.0.2", 00:17:04.739 "trsvcid": "4420" 00:17:04.739 }, 00:17:04.739 "peer_address": { 00:17:04.739 "trtype": "TCP", 00:17:04.739 "adrfam": "IPv4", 00:17:04.739 "traddr": "10.0.0.1", 00:17:04.739 "trsvcid": "35824" 00:17:04.739 }, 00:17:04.739 "auth": { 00:17:04.739 "state": "completed", 00:17:04.739 "digest": 
"sha384", 00:17:04.739 "dhgroup": "ffdhe6144" 00:17:04.739 } 00:17:04.739 } 00:17:04.739 ]' 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.739 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.000 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.000 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.000 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.000 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:05.000 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:05.570 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.830 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.831 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.831 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.401 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.401 { 00:17:06.401 "cntlid": 83, 00:17:06.401 "qid": 0, 00:17:06.401 "state": "enabled", 00:17:06.401 "thread": "nvmf_tgt_poll_group_000", 00:17:06.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:06.401 "listen_address": { 00:17:06.401 "trtype": "TCP", 00:17:06.401 "adrfam": "IPv4", 00:17:06.401 "traddr": "10.0.0.2", 00:17:06.401 
"trsvcid": "4420" 00:17:06.401 }, 00:17:06.401 "peer_address": { 00:17:06.401 "trtype": "TCP", 00:17:06.401 "adrfam": "IPv4", 00:17:06.401 "traddr": "10.0.0.1", 00:17:06.401 "trsvcid": "35858" 00:17:06.401 }, 00:17:06.401 "auth": { 00:17:06.401 "state": "completed", 00:17:06.401 "digest": "sha384", 00:17:06.401 "dhgroup": "ffdhe6144" 00:17:06.401 } 00:17:06.401 } 00:17:06.401 ]' 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.401 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.661 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.661 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.661 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.661 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.661 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.921 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:06.921 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.491 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.751 
18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.751 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.012 00:17:08.012 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.012 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.012 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.272 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.272 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.272 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.272 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.272 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.272 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.272 { 00:17:08.272 "cntlid": 85, 00:17:08.272 "qid": 0, 00:17:08.272 "state": "enabled", 00:17:08.272 "thread": "nvmf_tgt_poll_group_000", 00:17:08.272 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:08.272 "listen_address": { 00:17:08.272 "trtype": "TCP", 00:17:08.272 "adrfam": "IPv4", 00:17:08.272 "traddr": "10.0.0.2", 00:17:08.272 "trsvcid": "4420" 00:17:08.272 }, 00:17:08.273 "peer_address": { 00:17:08.273 "trtype": "TCP", 00:17:08.273 "adrfam": "IPv4", 00:17:08.273 "traddr": "10.0.0.1", 00:17:08.273 "trsvcid": "35894" 00:17:08.273 }, 00:17:08.273 "auth": { 00:17:08.273 "state": "completed", 00:17:08.273 "digest": "sha384", 00:17:08.273 "dhgroup": "ffdhe6144" 00:17:08.273 } 00:17:08.273 } 00:17:08.273 ]' 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.273 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.533 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:08.533 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.102 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:09.102 18:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.363 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.623 00:17:09.623 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.623 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.623 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.884 { 00:17:09.884 "cntlid": 87, 
00:17:09.884 "qid": 0, 00:17:09.884 "state": "enabled", 00:17:09.884 "thread": "nvmf_tgt_poll_group_000", 00:17:09.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:09.884 "listen_address": { 00:17:09.884 "trtype": "TCP", 00:17:09.884 "adrfam": "IPv4", 00:17:09.884 "traddr": "10.0.0.2", 00:17:09.884 "trsvcid": "4420" 00:17:09.884 }, 00:17:09.884 "peer_address": { 00:17:09.884 "trtype": "TCP", 00:17:09.884 "adrfam": "IPv4", 00:17:09.884 "traddr": "10.0.0.1", 00:17:09.884 "trsvcid": "35930" 00:17:09.884 }, 00:17:09.884 "auth": { 00:17:09.884 "state": "completed", 00:17:09.884 "digest": "sha384", 00:17:09.884 "dhgroup": "ffdhe6144" 00:17:09.884 } 00:17:09.884 } 00:17:09.884 ]' 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.884 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.144 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:10.144 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.714 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.715 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.974 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.545 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.545 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.805 { 00:17:11.805 "cntlid": 89, 00:17:11.805 "qid": 0, 00:17:11.805 "state": "enabled", 00:17:11.805 "thread": "nvmf_tgt_poll_group_000", 00:17:11.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:11.805 "listen_address": { 00:17:11.805 "trtype": "TCP", 00:17:11.805 "adrfam": "IPv4", 00:17:11.805 "traddr": "10.0.0.2", 00:17:11.805 "trsvcid": "4420" 00:17:11.805 }, 00:17:11.805 "peer_address": { 00:17:11.805 "trtype": "TCP", 00:17:11.805 "adrfam": "IPv4", 00:17:11.805 "traddr": "10.0.0.1", 00:17:11.805 "trsvcid": "45596" 00:17:11.805 }, 00:17:11.805 "auth": { 00:17:11.805 "state": "completed", 00:17:11.805 "digest": "sha384", 00:17:11.805 "dhgroup": "ffdhe8192" 00:17:11.805 } 00:17:11.805 } 00:17:11.805 ]' 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.805 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.806 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.806 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.066 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:12.066 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.635 18:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.635 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.894 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.465 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.465 { 00:17:13.465 "cntlid": 91, 00:17:13.465 "qid": 0, 00:17:13.465 "state": "enabled", 00:17:13.465 "thread": "nvmf_tgt_poll_group_000", 00:17:13.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:13.465 "listen_address": { 00:17:13.465 "trtype": "TCP", 00:17:13.465 "adrfam": "IPv4", 00:17:13.465 "traddr": "10.0.0.2", 00:17:13.465 "trsvcid": "4420" 00:17:13.465 }, 00:17:13.465 "peer_address": { 00:17:13.465 "trtype": "TCP", 00:17:13.465 "adrfam": "IPv4", 00:17:13.465 "traddr": "10.0.0.1", 00:17:13.465 "trsvcid": "45632" 00:17:13.465 }, 00:17:13.465 "auth": { 00:17:13.465 "state": "completed", 00:17:13.465 "digest": "sha384", 00:17:13.465 "dhgroup": "ffdhe8192" 00:17:13.465 } 00:17:13.465 } 00:17:13.465 ]' 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.465 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.725 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.725 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.725 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.725 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:13.725 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.670 18:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.670 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.671 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.242 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.242 18:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.242 { 00:17:15.242 "cntlid": 93, 00:17:15.242 "qid": 0, 00:17:15.242 "state": "enabled", 00:17:15.242 "thread": "nvmf_tgt_poll_group_000", 00:17:15.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:15.242 "listen_address": { 00:17:15.242 "trtype": "TCP", 00:17:15.242 "adrfam": "IPv4", 00:17:15.242 "traddr": "10.0.0.2", 00:17:15.242 "trsvcid": "4420" 00:17:15.242 }, 00:17:15.242 "peer_address": { 00:17:15.242 "trtype": "TCP", 00:17:15.242 "adrfam": "IPv4", 00:17:15.242 "traddr": "10.0.0.1", 00:17:15.242 "trsvcid": "45668" 00:17:15.242 }, 00:17:15.242 "auth": { 00:17:15.242 "state": "completed", 00:17:15.242 "digest": "sha384", 00:17:15.242 "dhgroup": "ffdhe8192" 00:17:15.242 } 00:17:15.242 } 00:17:15.242 ]' 00:17:15.242 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.502 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.763 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:15.763 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.332 18:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:16.332 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.592 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.163 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.163 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.424 { 00:17:17.424 "cntlid": 95, 00:17:17.424 "qid": 0, 00:17:17.424 "state": "enabled", 00:17:17.424 "thread": "nvmf_tgt_poll_group_000", 00:17:17.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:17.424 "listen_address": { 00:17:17.424 "trtype": "TCP", 00:17:17.424 "adrfam": "IPv4", 00:17:17.424 "traddr": "10.0.0.2", 00:17:17.424 "trsvcid": "4420" 00:17:17.424 }, 00:17:17.424 "peer_address": { 00:17:17.424 "trtype": "TCP", 00:17:17.424 "adrfam": "IPv4", 00:17:17.424 "traddr": "10.0.0.1", 00:17:17.424 "trsvcid": "45700" 00:17:17.424 }, 00:17:17.424 "auth": { 00:17:17.424 "state": "completed", 00:17:17.424 "digest": "sha384", 00:17:17.424 "dhgroup": "ffdhe8192" 00:17:17.424 } 00:17:17.424 } 00:17:17.424 ]' 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.424 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.685 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:17.685 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.255 18:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:18.255 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.515 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.516 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.776 00:17:18.776 
18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.776 { 00:17:18.776 "cntlid": 97, 00:17:18.776 "qid": 0, 00:17:18.776 "state": "enabled", 00:17:18.776 "thread": "nvmf_tgt_poll_group_000", 00:17:18.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:18.776 "listen_address": { 00:17:18.776 "trtype": "TCP", 00:17:18.776 "adrfam": "IPv4", 00:17:18.776 "traddr": "10.0.0.2", 00:17:18.776 "trsvcid": "4420" 00:17:18.776 }, 00:17:18.776 "peer_address": { 00:17:18.776 "trtype": "TCP", 00:17:18.776 "adrfam": "IPv4", 00:17:18.776 "traddr": "10.0.0.1", 00:17:18.776 "trsvcid": "45726" 00:17:18.776 }, 00:17:18.776 "auth": { 00:17:18.776 "state": "completed", 00:17:18.776 "digest": "sha512", 00:17:18.776 "dhgroup": "null" 00:17:18.776 } 00:17:18.776 } 00:17:18.776 ]' 00:17:18.776 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.037 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.297 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:19.297 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:19.867 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.127 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.388 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.388 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.649 { 00:17:20.649 "cntlid": 99, 00:17:20.649 "qid": 0, 00:17:20.649 "state": "enabled", 00:17:20.649 "thread": "nvmf_tgt_poll_group_000", 00:17:20.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:20.649 "listen_address": { 00:17:20.649 "trtype": "TCP", 00:17:20.649 "adrfam": "IPv4", 00:17:20.649 "traddr": "10.0.0.2", 00:17:20.649 "trsvcid": "4420" 00:17:20.649 }, 00:17:20.649 "peer_address": { 00:17:20.649 "trtype": "TCP", 00:17:20.649 "adrfam": "IPv4", 00:17:20.649 "traddr": "10.0.0.1", 00:17:20.649 "trsvcid": "45744" 00:17:20.649 }, 00:17:20.649 "auth": { 00:17:20.649 "state": "completed", 00:17:20.649 "digest": "sha512", 00:17:20.649 "dhgroup": "null" 00:17:20.649 } 00:17:20.649 } 00:17:20.649 ]' 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.649 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.910 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:20.910 18:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.480 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
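The cycle traced above repeats one fixed pattern per digest/dhgroup combination: reconfigure the host's allowed parameters, register the key for the host NQN on the target, then attach with authentication. A minimal sketch of that pattern, assuming the same target address (10.0.0.2:4420), subsystem (nqn.2024-03.io.spdk:cnode0), and host RPC socket (/var/tmp/host.sock) seen in this log; HOSTNQN and the key names (key0/ckey0) stand in for this run's actual values:

  # host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side: allow the host and bind its key (the ctrlr key enables bidirectional auth)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller; the fabrics CONNECT must now authenticate
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
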
00:17:21.741 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.002 00:17:22.002 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.003 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.003 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.003 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.003 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.003 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.003 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.263 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.263 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.264 { 00:17:22.264 "cntlid": 101, 00:17:22.264 "qid": 0, 00:17:22.264 "state": "enabled", 00:17:22.264 "thread": "nvmf_tgt_poll_group_000", 00:17:22.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:22.264 "listen_address": { 00:17:22.264 "trtype": "TCP", 00:17:22.264 "adrfam": "IPv4", 00:17:22.264 "traddr": "10.0.0.2", 00:17:22.264 "trsvcid": "4420" 00:17:22.264 }, 00:17:22.264 "peer_address": { 00:17:22.264 "trtype": "TCP", 00:17:22.264 "adrfam": "IPv4", 00:17:22.264 "traddr": "10.0.0.1", 00:17:22.264 "trsvcid": "35756" 00:17:22.264 }, 00:17:22.264 "auth": { 00:17:22.264 "state": "completed", 00:17:22.264 "digest": "sha512", 00:17:22.264 "dhgroup": "null" 00:17:22.264 } 00:17:22.264 } 00:17:22.264 ]' 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.264 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.524 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:22.524 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.094 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.355 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.615 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.615 { 00:17:23.615 "cntlid": 103, 00:17:23.615 "qid": 0, 00:17:23.615 "state": "enabled", 00:17:23.615 "thread": "nvmf_tgt_poll_group_000", 00:17:23.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:23.615 "listen_address": { 00:17:23.615 "trtype": "TCP", 00:17:23.615 "adrfam": "IPv4", 00:17:23.615 "traddr": "10.0.0.2", 00:17:23.615 "trsvcid": "4420" 00:17:23.615 }, 00:17:23.615 "peer_address": { 00:17:23.615 "trtype": "TCP", 00:17:23.615 "adrfam": "IPv4", 00:17:23.615 "traddr": "10.0.0.1", 00:17:23.615 "trsvcid": "35782" 00:17:23.615 }, 00:17:23.615 "auth": { 00:17:23.615 "state": "completed", 00:17:23.615 "digest": "sha512", 00:17:23.615 "dhgroup": "null" 00:17:23.615 } 00:17:23.615 } 00:17:23.615 ]' 00:17:23.615 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.875 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.134 18:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:24.134 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.706 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
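
[editor's note] The --dhchap-secret/--dhchap-ctrl-secret strings traded in the nvme connect lines above are DH-HMAC-CHAP secrets in the DHHC-1:<t>:<base64>: transport format: <t> records how the key material is wrapped (00 = unwrapped, 01/02/03 = HMAC-SHA-256/384/512) and the base64 payload carries the key plus an integrity check. Note the asymmetry visible in the traces: key3 is registered without --dhchap-ctrlr-key, so that pairing authenticates the host only, while key0-key2 also carry a controller key for bidirectional authentication. As a sketch outside this run, assuming an nvme-cli recent enough to ship the gen-dhchap-key subcommand, a compatible secret can be produced like so:

# Hedged sketch, not part of this run; flag spellings per recent nvme-cli.
nvme gen-dhchap-key --key-length=48 --hmac=2 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
# Prints a secret of the form DHHC-1:02:<base64>: usable as --dhchap-secret;
# --hmac=2 selects the SHA-384 wrapping that the "02" prefix advertises.
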
00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.965 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.965 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.226 { 00:17:25.226 "cntlid": 105, 00:17:25.226 "qid": 0, 00:17:25.226 "state": "enabled", 00:17:25.226 "thread": "nvmf_tgt_poll_group_000", 00:17:25.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:25.226 "listen_address": { 00:17:25.226 "trtype": "TCP", 00:17:25.226 "adrfam": "IPv4", 00:17:25.226 "traddr": "10.0.0.2", 00:17:25.226 "trsvcid": "4420" 00:17:25.226 }, 00:17:25.226 "peer_address": { 00:17:25.226 "trtype": "TCP", 00:17:25.226 "adrfam": "IPv4", 00:17:25.226 "traddr": "10.0.0.1", 00:17:25.226 "trsvcid": "35806" 00:17:25.226 }, 00:17:25.226 "auth": { 00:17:25.226 "state": "completed", 00:17:25.226 "digest": "sha512", 00:17:25.226 "dhgroup": "ffdhe2048" 00:17:25.226 } 00:17:25.226 } 00:17:25.226 ]' 00:17:25.226 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.486 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.486 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.486 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.486 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.486 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.486 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.486 18:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.746 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:25.746 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.315 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.575 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.835 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.835 { 00:17:26.835 "cntlid": 107, 00:17:26.835 "qid": 0, 00:17:26.835 "state": "enabled", 00:17:26.835 "thread": "nvmf_tgt_poll_group_000", 00:17:26.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:26.835 "listen_address": { 00:17:26.835 "trtype": "TCP", 00:17:26.835 "adrfam": "IPv4", 00:17:26.835 "traddr": "10.0.0.2", 00:17:26.835 "trsvcid": "4420" 00:17:26.835 }, 00:17:26.835 "peer_address": { 00:17:26.835 "trtype": "TCP", 00:17:26.835 "adrfam": "IPv4", 00:17:26.835 "traddr": "10.0.0.1", 00:17:26.835 "trsvcid": "35828" 00:17:26.835 }, 00:17:26.835 "auth": { 00:17:26.835 "state": "completed", 00:17:26.835 "digest": "sha512", 00:17:26.835 "dhgroup": "ffdhe2048" 00:17:26.835 } 00:17:26.835 } 00:17:26.835 ]' 00:17:26.835 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.095 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.095 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.095 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.095 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:27.095 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.095 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.095 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.356 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:27.356 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.926 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
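
[editor's note] Stripped of the xtrace noise, each iteration pairs a target-side nvmf_subsystem_add_host (registering the subsystem's key names for this host) with a host-side bdev_nvme_attach_controller that presents the same keys through the host app's RPC socket, /var/tmp/host.sock, as the hostrpc wrapper shows. A minimal standalone rendering of that pair, with NQNs and key names taken from this log and the target app assumed to listen on its default socket:

# Sketch of the add_host/attach pairing traced above.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0
# Target side: admit the host and bind it to keyring entries key2/ckey2.
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host side: attach, presenting the same key pair for mutual DH-HMAC-CHAP.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
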
00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.185 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.445 00:17:28.445 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.445 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.445 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.446 { 00:17:28.446 "cntlid": 109, 00:17:28.446 "qid": 0, 00:17:28.446 "state": "enabled", 00:17:28.446 "thread": "nvmf_tgt_poll_group_000", 00:17:28.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.446 "listen_address": { 00:17:28.446 "trtype": "TCP", 00:17:28.446 "adrfam": "IPv4", 00:17:28.446 "traddr": "10.0.0.2", 00:17:28.446 "trsvcid": "4420" 00:17:28.446 }, 00:17:28.446 "peer_address": { 00:17:28.446 "trtype": "TCP", 00:17:28.446 "adrfam": "IPv4", 00:17:28.446 "traddr": "10.0.0.1", 00:17:28.446 "trsvcid": "35852" 00:17:28.446 }, 00:17:28.446 "auth": { 00:17:28.446 "state": "completed", 00:17:28.446 "digest": "sha512", 00:17:28.446 "dhgroup": "ffdhe2048" 00:17:28.446 } 00:17:28.446 } 00:17:28.446 ]' 00:17:28.446 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.706 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.706 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.706 18:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.706 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.706 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.706 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.706 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.966 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:28.966 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.537 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.797 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:29.797 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.797 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.797 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.797 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.797 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.798 18:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:29.798 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.798 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.798 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.798 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.798 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.798 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.058 00:17:30.058 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.058 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.058 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.058 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.058 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.058 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.058 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.318 { 00:17:30.318 "cntlid": 111, 00:17:30.318 "qid": 0, 00:17:30.318 "state": "enabled", 00:17:30.318 "thread": "nvmf_tgt_poll_group_000", 00:17:30.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.318 "listen_address": { 00:17:30.318 "trtype": "TCP", 00:17:30.318 "adrfam": "IPv4", 00:17:30.318 "traddr": "10.0.0.2", 00:17:30.318 "trsvcid": "4420" 00:17:30.318 }, 00:17:30.318 "peer_address": { 00:17:30.318 "trtype": "TCP", 00:17:30.318 "adrfam": "IPv4", 00:17:30.318 "traddr": "10.0.0.1", 00:17:30.318 "trsvcid": "35896" 00:17:30.318 }, 00:17:30.318 "auth": { 00:17:30.318 "state": "completed", 00:17:30.318 "digest": "sha512", 00:17:30.318 "dhgroup": "ffdhe2048" 00:17:30.318 } 00:17:30.318 } 00:17:30.318 ]' 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.318 
18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.318 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.579 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:30.579 18:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:31.150 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.409 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.410 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.410 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.410 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.670 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.670 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.670 { 00:17:31.670 "cntlid": 113, 00:17:31.670 "qid": 0, 00:17:31.670 "state": "enabled", 00:17:31.670 "thread": "nvmf_tgt_poll_group_000", 00:17:31.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:31.670 "listen_address": { 00:17:31.670 "trtype": "TCP", 00:17:31.670 "adrfam": "IPv4", 00:17:31.670 "traddr": "10.0.0.2", 00:17:31.670 "trsvcid": "4420" 00:17:31.670 }, 00:17:31.670 "peer_address": { 00:17:31.670 "trtype": "TCP", 00:17:31.670 "adrfam": "IPv4", 00:17:31.670 "traddr": "10.0.0.1", 00:17:31.670 "trsvcid": "49234" 00:17:31.670 }, 00:17:31.670 "auth": { 00:17:31.670 "state": "completed", 00:17:31.670 "digest": "sha512", 00:17:31.670 "dhgroup": "ffdhe3072" 00:17:31.670 } 00:17:31.670 } 00:17:31.670 ]' 00:17:31.670 18:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.930 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.931 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.931 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.931 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.931 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.931 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.931 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.190 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:32.190 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:32.760 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.021 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.282 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.282 { 00:17:33.282 "cntlid": 115, 00:17:33.282 "qid": 0, 00:17:33.282 "state": "enabled", 00:17:33.282 "thread": "nvmf_tgt_poll_group_000", 00:17:33.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:33.282 "listen_address": { 00:17:33.282 "trtype": "TCP", 00:17:33.282 "adrfam": "IPv4", 00:17:33.282 "traddr": "10.0.0.2", 00:17:33.282 "trsvcid": "4420" 00:17:33.282 }, 00:17:33.282 "peer_address": { 00:17:33.282 "trtype": "TCP", 00:17:33.282 "adrfam": "IPv4", 
00:17:33.282 "traddr": "10.0.0.1", 00:17:33.282 "trsvcid": "49258" 00:17:33.282 }, 00:17:33.282 "auth": { 00:17:33.282 "state": "completed", 00:17:33.282 "digest": "sha512", 00:17:33.282 "dhgroup": "ffdhe3072" 00:17:33.282 } 00:17:33.282 } 00:17:33.282 ]' 00:17:33.282 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.543 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.803 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:33.803 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.372 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.631 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:34.631 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.631 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.631 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.631 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.631 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.632 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.632 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.892 { 00:17:34.892 "cntlid": 117, 00:17:34.892 "qid": 0, 00:17:34.892 "state": "enabled", 00:17:34.892 "thread": "nvmf_tgt_poll_group_000", 00:17:34.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:34.892 "listen_address": { 00:17:34.892 "trtype": "TCP", 
00:17:34.892 "adrfam": "IPv4", 00:17:34.892 "traddr": "10.0.0.2", 00:17:34.892 "trsvcid": "4420" 00:17:34.892 }, 00:17:34.892 "peer_address": { 00:17:34.892 "trtype": "TCP", 00:17:34.892 "adrfam": "IPv4", 00:17:34.892 "traddr": "10.0.0.1", 00:17:34.892 "trsvcid": "49278" 00:17:34.892 }, 00:17:34.892 "auth": { 00:17:34.892 "state": "completed", 00:17:34.892 "digest": "sha512", 00:17:34.892 "dhgroup": "ffdhe3072" 00:17:34.892 } 00:17:34.892 } 00:17:34.892 ]' 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.892 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.153 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.153 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.153 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.153 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.153 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.412 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:35.412 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.984 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.245 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.506 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.506 { 00:17:36.506 "cntlid": 119, 00:17:36.506 "qid": 0, 00:17:36.506 "state": "enabled", 00:17:36.506 "thread": "nvmf_tgt_poll_group_000", 00:17:36.506 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:36.506 "listen_address": { 00:17:36.506 "trtype": "TCP", 00:17:36.506 "adrfam": "IPv4", 00:17:36.506 "traddr": "10.0.0.2", 00:17:36.506 "trsvcid": "4420" 00:17:36.506 }, 00:17:36.506 "peer_address": { 00:17:36.506 "trtype": "TCP", 00:17:36.506 "adrfam": "IPv4", 00:17:36.506 "traddr": "10.0.0.1", 00:17:36.506 "trsvcid": "49310" 00:17:36.506 }, 00:17:36.506 "auth": { 00:17:36.506 "state": "completed", 00:17:36.506 "digest": "sha512", 00:17:36.506 "dhgroup": "ffdhe3072" 00:17:36.506 } 00:17:36.506 } 00:17:36.506 ]' 00:17:36.506 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.767 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.028 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:37.028 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.597 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.597 18:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.858 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.119 00:17:38.119 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.119 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.119 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.119 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.119 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.119 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.119 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.379 18:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.379 { 00:17:38.379 "cntlid": 121, 00:17:38.379 "qid": 0, 00:17:38.379 "state": "enabled", 00:17:38.379 "thread": "nvmf_tgt_poll_group_000", 00:17:38.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:38.379 "listen_address": { 00:17:38.379 "trtype": "TCP", 00:17:38.379 "adrfam": "IPv4", 00:17:38.379 "traddr": "10.0.0.2", 00:17:38.379 "trsvcid": "4420" 00:17:38.379 }, 00:17:38.379 "peer_address": { 00:17:38.379 "trtype": "TCP", 00:17:38.379 "adrfam": "IPv4", 00:17:38.379 "traddr": "10.0.0.1", 00:17:38.379 "trsvcid": "49332" 00:17:38.379 }, 00:17:38.379 "auth": { 00:17:38.379 "state": "completed", 00:17:38.379 "digest": "sha512", 00:17:38.379 "dhgroup": "ffdhe4096" 00:17:38.379 } 00:17:38.379 } 00:17:38.379 ]' 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.379 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.640 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:38.640 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
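Each pass in this stretch of the log repeats the same host/target authentication cycle for one digest/dhgroup/key combination. For reference, a condensed sketch of a single iteration, rebuilt only from the rpc.py and nvme-cli invocations that appear above — rpc.py paths are shortened, the target-side rpc_cmd wrapper is shown as a plain rpc.py call against its default socket, $hostnqn and $hostid stand in for the literal nqn.2014-08.org.nvmexpress:uuid:00539ede-... value, the DHHC-1 secret is truncated, and key0/ckey0 are the key names the test script loaded earlier:

    # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target side: authorize the host NQN on the subsystem with the key under
    # test (plus a controller key when mutual authentication is exercised).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attaching a controller forces the authentication handshake.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify: the controller came up and the target saw a completed auth exchange.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'  # sha512 / ffdhe4096 / completed

    # Tear down, then repeat the connect with the kernel initiator and raw secret.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
        --hostid "$hostid" -l 0 --dhchap-secret "DHHC-1:00:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The qpairs JSON dumped after each attach is what the [[ sha512 == \s\h\a\5\1\2 ]]-style assertions in the log are checking: .auth.digest and .auth.dhgroup must match the options just set on the host, and .auth.state must read "completed".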
00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.210 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.470 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:39.470 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.470 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.470 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.471 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.731 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.731 { 00:17:39.731 "cntlid": 123, 00:17:39.731 "qid": 0, 00:17:39.731 "state": "enabled", 00:17:39.731 "thread": "nvmf_tgt_poll_group_000", 00:17:39.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:39.731 "listen_address": { 00:17:39.731 "trtype": "TCP", 00:17:39.731 "adrfam": "IPv4", 00:17:39.731 "traddr": "10.0.0.2", 00:17:39.731 "trsvcid": "4420" 00:17:39.731 }, 00:17:39.731 "peer_address": { 00:17:39.731 "trtype": "TCP", 00:17:39.731 "adrfam": "IPv4", 00:17:39.731 "traddr": "10.0.0.1", 00:17:39.731 "trsvcid": "49374" 00:17:39.731 }, 00:17:39.731 "auth": { 00:17:39.731 "state": "completed", 00:17:39.731 "digest": "sha512", 00:17:39.731 "dhgroup": "ffdhe4096" 00:17:39.731 } 00:17:39.731 } 00:17:39.731 ]' 00:17:39.731 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.990 18:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.251 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:40.251 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:40.821 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.822 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.822 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.822 18:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.822 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.822 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.822 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:40.822 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.081 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.342 00:17:41.342 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.342 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.342 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.342 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.342 18:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.342 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.342 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.602 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.602 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.602 { 00:17:41.602 "cntlid": 125, 00:17:41.602 "qid": 0, 00:17:41.602 "state": "enabled", 00:17:41.602 "thread": "nvmf_tgt_poll_group_000", 00:17:41.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:41.602 "listen_address": { 00:17:41.602 "trtype": "TCP", 00:17:41.602 "adrfam": "IPv4", 00:17:41.602 "traddr": "10.0.0.2", 00:17:41.602 "trsvcid": "4420" 00:17:41.602 }, 00:17:41.602 "peer_address": { 00:17:41.602 "trtype": "TCP", 00:17:41.602 "adrfam": "IPv4", 00:17:41.602 "traddr": "10.0.0.1", 00:17:41.602 "trsvcid": "53354" 00:17:41.602 }, 00:17:41.602 "auth": { 00:17:41.602 "state": "completed", 00:17:41.602 "digest": "sha512", 00:17:41.602 "dhgroup": "ffdhe4096" 00:17:41.602 } 00:17:41.602 } 00:17:41.602 ]' 00:17:41.602 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.602 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.602 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.603 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.603 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.603 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.603 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.603 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.863 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:41.863 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.432 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.693 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.953 00:17:42.953 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.953 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.953 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.213 18:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.213 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.214 { 00:17:43.214 "cntlid": 127, 00:17:43.214 "qid": 0, 00:17:43.214 "state": "enabled", 00:17:43.214 "thread": "nvmf_tgt_poll_group_000", 00:17:43.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:43.214 "listen_address": { 00:17:43.214 "trtype": "TCP", 00:17:43.214 "adrfam": "IPv4", 00:17:43.214 "traddr": "10.0.0.2", 00:17:43.214 "trsvcid": "4420" 00:17:43.214 }, 00:17:43.214 "peer_address": { 00:17:43.214 "trtype": "TCP", 00:17:43.214 "adrfam": "IPv4", 00:17:43.214 "traddr": "10.0.0.1", 00:17:43.214 "trsvcid": "53384" 00:17:43.214 }, 00:17:43.214 "auth": { 00:17:43.214 "state": "completed", 00:17:43.214 "digest": "sha512", 00:17:43.214 "dhgroup": "ffdhe4096" 00:17:43.214 } 00:17:43.214 } 00:17:43.214 ]' 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.214 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.474 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:43.474 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:44.043 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.303 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.304 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.304 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.563 00:17:44.563 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.563 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.563 
18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.823 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.823 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.823 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.823 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.823 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.823 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.823 { 00:17:44.823 "cntlid": 129, 00:17:44.823 "qid": 0, 00:17:44.823 "state": "enabled", 00:17:44.823 "thread": "nvmf_tgt_poll_group_000", 00:17:44.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:44.824 "listen_address": { 00:17:44.824 "trtype": "TCP", 00:17:44.824 "adrfam": "IPv4", 00:17:44.824 "traddr": "10.0.0.2", 00:17:44.824 "trsvcid": "4420" 00:17:44.824 }, 00:17:44.824 "peer_address": { 00:17:44.824 "trtype": "TCP", 00:17:44.824 "adrfam": "IPv4", 00:17:44.824 "traddr": "10.0.0.1", 00:17:44.824 "trsvcid": "53402" 00:17:44.824 }, 00:17:44.824 "auth": { 00:17:44.824 "state": "completed", 00:17:44.824 "digest": "sha512", 00:17:44.824 "dhgroup": "ffdhe6144" 00:17:44.824 } 00:17:44.824 } 00:17:44.824 ]' 00:17:44.824 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.824 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.824 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.824 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.824 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.084 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.084 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.084 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.084 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:45.084 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret 
DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.023 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.024 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.284 00:17:46.284 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.284 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.284 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.545 { 00:17:46.545 "cntlid": 131, 00:17:46.545 "qid": 0, 00:17:46.545 "state": "enabled", 00:17:46.545 "thread": "nvmf_tgt_poll_group_000", 00:17:46.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:46.545 "listen_address": { 00:17:46.545 "trtype": "TCP", 00:17:46.545 "adrfam": "IPv4", 00:17:46.545 "traddr": "10.0.0.2", 00:17:46.545 "trsvcid": "4420" 00:17:46.545 }, 00:17:46.545 "peer_address": { 00:17:46.545 "trtype": "TCP", 00:17:46.545 "adrfam": "IPv4", 00:17:46.545 "traddr": "10.0.0.1", 00:17:46.545 "trsvcid": "53434" 00:17:46.545 }, 00:17:46.545 "auth": { 00:17:46.545 "state": "completed", 00:17:46.545 "digest": "sha512", 00:17:46.545 "dhgroup": "ffdhe6144" 00:17:46.545 } 00:17:46.545 } 00:17:46.545 ]' 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.545 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.805 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:46.805 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:47.375 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.375 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.375 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.375 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.635 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.895 00:17:48.155 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.155 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.155 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.155 { 00:17:48.155 "cntlid": 133, 00:17:48.155 "qid": 0, 00:17:48.155 "state": "enabled", 00:17:48.155 "thread": "nvmf_tgt_poll_group_000", 00:17:48.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:48.155 "listen_address": { 00:17:48.155 "trtype": "TCP", 00:17:48.155 "adrfam": "IPv4", 00:17:48.155 "traddr": "10.0.0.2", 00:17:48.155 "trsvcid": "4420" 00:17:48.155 }, 00:17:48.155 "peer_address": { 00:17:48.155 "trtype": "TCP", 00:17:48.155 "adrfam": "IPv4", 00:17:48.155 "traddr": "10.0.0.1", 00:17:48.155 "trsvcid": "53464" 00:17:48.155 }, 00:17:48.155 "auth": { 00:17:48.155 "state": "completed", 00:17:48.155 "digest": "sha512", 00:17:48.155 "dhgroup": "ffdhe6144" 00:17:48.155 } 00:17:48.155 } 00:17:48.155 ]' 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.155 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret 
DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:48.415 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:49.362 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.623 00:17:49.623 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.623 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.623 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.884 { 00:17:49.884 "cntlid": 135, 00:17:49.884 "qid": 0, 00:17:49.884 "state": "enabled", 00:17:49.884 "thread": "nvmf_tgt_poll_group_000", 00:17:49.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:49.884 "listen_address": { 00:17:49.884 "trtype": "TCP", 00:17:49.884 "adrfam": "IPv4", 00:17:49.884 "traddr": "10.0.0.2", 00:17:49.884 "trsvcid": "4420" 00:17:49.884 }, 00:17:49.884 "peer_address": { 00:17:49.884 "trtype": "TCP", 00:17:49.884 "adrfam": "IPv4", 00:17:49.884 "traddr": "10.0.0.1", 00:17:49.884 "trsvcid": "53482" 00:17:49.884 }, 00:17:49.884 "auth": { 00:17:49.884 "state": "completed", 00:17:49.884 "digest": "sha512", 00:17:49.884 "dhgroup": "ffdhe6144" 00:17:49.884 } 00:17:49.884 } 00:17:49.884 ]' 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.884 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.145 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.145 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.145 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.145 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.145 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.146 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:50.146 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.084 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.084 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.085 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.655 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.655 { 00:17:51.655 "cntlid": 137, 00:17:51.655 "qid": 0, 00:17:51.655 "state": "enabled", 00:17:51.655 "thread": "nvmf_tgt_poll_group_000", 00:17:51.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:51.655 "listen_address": { 00:17:51.655 "trtype": "TCP", 00:17:51.655 "adrfam": "IPv4", 00:17:51.655 "traddr": "10.0.0.2", 00:17:51.655 "trsvcid": "4420" 00:17:51.655 }, 00:17:51.655 "peer_address": { 00:17:51.655 "trtype": "TCP", 00:17:51.655 "adrfam": "IPv4", 00:17:51.655 "traddr": "10.0.0.1", 00:17:51.655 "trsvcid": "44066" 00:17:51.655 }, 00:17:51.655 "auth": { 00:17:51.655 "state": "completed", 00:17:51.655 "digest": "sha512", 00:17:51.655 "dhgroup": "ffdhe8192" 00:17:51.655 } 00:17:51.655 } 00:17:51.655 ]' 00:17:51.655 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.916 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.176 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:52.176 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.747 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 18:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.007 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.578 00:17:53.578 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.579 { 00:17:53.579 "cntlid": 139, 00:17:53.579 "qid": 0, 00:17:53.579 "state": "enabled", 00:17:53.579 "thread": "nvmf_tgt_poll_group_000", 00:17:53.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:53.579 "listen_address": { 00:17:53.579 "trtype": "TCP", 00:17:53.579 "adrfam": "IPv4", 00:17:53.579 "traddr": "10.0.0.2", 00:17:53.579 "trsvcid": "4420" 00:17:53.579 }, 00:17:53.579 "peer_address": { 00:17:53.579 "trtype": "TCP", 00:17:53.579 "adrfam": "IPv4", 00:17:53.579 "traddr": "10.0.0.1", 00:17:53.579 "trsvcid": "44106" 00:17:53.579 }, 00:17:53.579 "auth": { 00:17:53.579 "state": "completed", 00:17:53.579 "digest": "sha512", 00:17:53.579 "dhgroup": "ffdhe8192" 00:17:53.579 } 00:17:53.579 } 00:17:53.579 ]' 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.579 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.839 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.839 18:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.839 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.839 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.839 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:53.839 18:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: --dhchap-ctrl-secret DHHC-1:02:YzBjZjlkMzEwM2MzOTdlNzkzNGMwOGY5ZjYwNTVjNDVmMzk0MWUxYjcwNDEzNmRlNP0iBA==: 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.792 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.793 18:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.793 18:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.363 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.363 { 00:17:55.363 "cntlid": 141, 00:17:55.363 "qid": 0, 00:17:55.363 "state": "enabled", 00:17:55.363 "thread": "nvmf_tgt_poll_group_000", 00:17:55.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.363 "listen_address": { 00:17:55.363 "trtype": "TCP", 00:17:55.363 "adrfam": "IPv4", 00:17:55.363 "traddr": "10.0.0.2", 00:17:55.363 "trsvcid": "4420" 00:17:55.363 }, 00:17:55.363 "peer_address": { 00:17:55.363 "trtype": "TCP", 00:17:55.363 "adrfam": "IPv4", 00:17:55.363 "traddr": "10.0.0.1", 00:17:55.363 "trsvcid": "44132" 00:17:55.363 }, 00:17:55.363 "auth": { 00:17:55.363 "state": "completed", 00:17:55.363 "digest": "sha512", 00:17:55.363 "dhgroup": "ffdhe8192" 00:17:55.363 } 00:17:55.363 } 00:17:55.363 ]' 00:17:55.363 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.623 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.623 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.623 18:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.623 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.623 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.623 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.624 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.883 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:55.884 18:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:01:NTJjYzA5NDg0N2JlYjVhMzdlOTQyNWVmY2Q2MDA5MmYN63kP: 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.454 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.715 18:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.715 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.976 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.236 { 00:17:57.236 "cntlid": 143, 00:17:57.236 "qid": 0, 00:17:57.236 "state": "enabled", 00:17:57.236 "thread": "nvmf_tgt_poll_group_000", 00:17:57.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.236 "listen_address": { 00:17:57.236 "trtype": "TCP", 00:17:57.236 "adrfam": "IPv4", 00:17:57.236 "traddr": "10.0.0.2", 00:17:57.236 "trsvcid": "4420" 00:17:57.236 }, 00:17:57.236 "peer_address": { 00:17:57.236 "trtype": "TCP", 00:17:57.236 "adrfam": "IPv4", 00:17:57.236 "traddr": "10.0.0.1", 00:17:57.236 "trsvcid": "44164" 00:17:57.236 }, 00:17:57.236 "auth": { 00:17:57.236 "state": "completed", 00:17:57.236 "digest": "sha512", 00:17:57.236 "dhgroup": "ffdhe8192" 00:17:57.236 } 00:17:57.236 } 00:17:57.236 ]' 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.236 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.236 
18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:57.520 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:17:58.139 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.415 18:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.415 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.002 00:17:59.002 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.002 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.002 18:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.262 { 00:17:59.262 "cntlid": 145, 00:17:59.262 "qid": 0, 00:17:59.262 "state": "enabled", 00:17:59.262 "thread": "nvmf_tgt_poll_group_000", 00:17:59.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:59.262 "listen_address": { 00:17:59.262 "trtype": "TCP", 00:17:59.262 "adrfam": "IPv4", 00:17:59.262 "traddr": "10.0.0.2", 00:17:59.262 "trsvcid": "4420" 00:17:59.262 }, 00:17:59.262 "peer_address": { 00:17:59.262 
"trtype": "TCP", 00:17:59.262 "adrfam": "IPv4", 00:17:59.262 "traddr": "10.0.0.1", 00:17:59.262 "trsvcid": "44192" 00:17:59.262 }, 00:17:59.262 "auth": { 00:17:59.262 "state": "completed", 00:17:59.262 "digest": "sha512", 00:17:59.262 "dhgroup": "ffdhe8192" 00:17:59.262 } 00:17:59.262 } 00:17:59.262 ]' 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.262 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.523 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:17:59.523 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YjUxYmZkMDQzZGUxZTA1NDRhZWQ3MTI4ODU4ZjIxYTIxYjUxYmQzZmZmNmYwOTQ5ztVpYw==: --dhchap-ctrl-secret DHHC-1:03:MmRmNTFkZmM1NWRkOGM5NTE5YmIzYzBhZGQ5MjA1NjQ4NzJjYjlmNGM2ZGEwM2ZiNzA3ODAwMGE3OTFhMmEzNb0Lbww=: 00:18:00.092 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:00.093 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:00.664 request: 00:18:00.664 { 00:18:00.664 "name": "nvme0", 00:18:00.664 "trtype": "tcp", 00:18:00.664 "traddr": "10.0.0.2", 00:18:00.664 "adrfam": "ipv4", 00:18:00.664 "trsvcid": "4420", 00:18:00.664 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:00.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:00.664 "prchk_reftag": false, 00:18:00.664 "prchk_guard": false, 00:18:00.664 "hdgst": false, 00:18:00.664 "ddgst": false, 00:18:00.664 "dhchap_key": "key2", 00:18:00.664 "allow_unrecognized_csi": false, 00:18:00.664 "method": "bdev_nvme_attach_controller", 00:18:00.664 "req_id": 1 00:18:00.664 } 00:18:00.664 Got JSON-RPC error response 00:18:00.664 response: 00:18:00.664 { 00:18:00.664 "code": -5, 00:18:00.664 "message": "Input/output error" 00:18:00.664 } 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.664 18:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.664 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:01.235 request: 00:18:01.235 { 00:18:01.235 "name": "nvme0", 00:18:01.235 "trtype": "tcp", 00:18:01.235 "traddr": "10.0.0.2", 00:18:01.235 "adrfam": "ipv4", 00:18:01.235 "trsvcid": "4420", 00:18:01.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:01.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.235 "prchk_reftag": false, 00:18:01.235 "prchk_guard": false, 00:18:01.235 "hdgst": false, 00:18:01.235 "ddgst": false, 00:18:01.235 "dhchap_key": "key1", 00:18:01.235 "dhchap_ctrlr_key": "ckey2", 00:18:01.235 "allow_unrecognized_csi": false, 00:18:01.235 "method": "bdev_nvme_attach_controller", 00:18:01.235 "req_id": 1 00:18:01.235 } 00:18:01.235 Got JSON-RPC error response 00:18:01.235 response: 00:18:01.235 { 00:18:01.235 "code": -5, 00:18:01.235 "message": "Input/output error" 00:18:01.235 } 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:01.235 18:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.235 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.496 request: 00:18:01.496 { 00:18:01.496 "name": "nvme0", 00:18:01.496 "trtype": "tcp", 00:18:01.496 "traddr": "10.0.0.2", 00:18:01.496 "adrfam": "ipv4", 00:18:01.496 "trsvcid": "4420", 00:18:01.496 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:01.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.496 "prchk_reftag": false, 00:18:01.496 "prchk_guard": false, 00:18:01.496 "hdgst": false, 00:18:01.496 "ddgst": false, 00:18:01.496 "dhchap_key": "key1", 00:18:01.496 "dhchap_ctrlr_key": "ckey1", 00:18:01.496 "allow_unrecognized_csi": false, 00:18:01.496 "method": "bdev_nvme_attach_controller", 00:18:01.496 "req_id": 1 00:18:01.496 } 00:18:01.496 Got JSON-RPC error response 00:18:01.496 response: 00:18:01.496 { 00:18:01.496 "code": -5, 00:18:01.496 "message": "Input/output error" 00:18:01.496 } 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1196461 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1196461 ']' 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1196461 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.496 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1196461 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1196461' 00:18:01.758 killing process with pid 1196461 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1196461 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1196461 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1222756 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1222756 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1222756 ']' 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.758 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1222756 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1222756 ']' 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
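At this point the test has killed the original nvmf target (pid 1196461) and is restarting it with --wait-for-rpc plus the nvmf_auth log flag, so that DH-HMAC-CHAP keys can be registered through the keyring before the subsystem starts serving. A minimal sketch of that restart sequence, assuming the same netns, binary path, RPC socket, and key files shown in this log:

  # start the target inside the test netns; --wait-for-rpc holds it in a
  # pre-init state until framework_start_init is issued over the RPC socket
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # once the app listens on /var/tmp/spdk.sock, load the DH-CHAP key files
  # (the /tmp/spdk.key-* names are the ones this run created earlier)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      keyring_file_add_key key0 /tmp/spdk.key-null.Fz0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jxx

The keyring_file_add_key calls for the remaining key1..key3 and ckey1..ckey2 files follow in the next entries.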
00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.700 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 null0 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Fz0 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Jxx ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jxx 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IIy 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Qni ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qni 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:02.961 18:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2gf 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kbF ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kbF 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CCS 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
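The key3 attach above is the one-way variant of the handshake: the host proves possession of key3, but no --dhchap-ctrlr-key is supplied (ckeys[3] is empty in this run), so the target is not challenged in return — compare the earlier key0/ckey0 and key1/ckey1 attaches, which are bidirectional. A condensed sketch of the host-side call, where $hostnqn is shorthand for the UUID-based host NQN used throughout this run:

  # unidirectional DH-HMAC-CHAP: host authenticates to the target with key3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key3
  # a bidirectional attach would also pass --dhchap-ctrlr-key so the target
  # must answer a challenge too (hypothetical here: no ckey3 is registered)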
00:18:02.961 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.902 nvme0n1 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.902 { 00:18:03.902 "cntlid": 1, 00:18:03.902 "qid": 0, 00:18:03.902 "state": "enabled", 00:18:03.902 "thread": "nvmf_tgt_poll_group_000", 00:18:03.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.902 "listen_address": { 00:18:03.902 "trtype": "TCP", 00:18:03.902 "adrfam": "IPv4", 00:18:03.902 "traddr": "10.0.0.2", 00:18:03.902 "trsvcid": "4420" 00:18:03.902 }, 00:18:03.902 "peer_address": { 00:18:03.902 "trtype": "TCP", 00:18:03.902 "adrfam": "IPv4", 00:18:03.902 "traddr": "10.0.0.1", 00:18:03.902 "trsvcid": "58834" 00:18:03.902 }, 00:18:03.902 "auth": { 00:18:03.902 "state": "completed", 00:18:03.902 "digest": "sha512", 00:18:03.902 "dhgroup": "ffdhe8192" 00:18:03.902 } 00:18:03.902 } 00:18:03.902 ]' 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.902 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.162 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.162 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.162 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.162 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.162 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.422 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:18:04.422 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:04.991 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.251 request: 00:18:05.251 { 00:18:05.251 "name": "nvme0", 00:18:05.251 "trtype": "tcp", 00:18:05.251 "traddr": "10.0.0.2", 00:18:05.251 "adrfam": "ipv4", 00:18:05.251 "trsvcid": "4420", 00:18:05.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.251 "prchk_reftag": false, 00:18:05.251 "prchk_guard": false, 00:18:05.251 "hdgst": false, 00:18:05.251 "ddgst": false, 00:18:05.251 "dhchap_key": "key3", 00:18:05.251 "allow_unrecognized_csi": false, 00:18:05.251 "method": "bdev_nvme_attach_controller", 00:18:05.251 "req_id": 1 00:18:05.251 } 00:18:05.251 Got JSON-RPC error response 00:18:05.251 response: 00:18:05.251 { 00:18:05.251 "code": -5, 00:18:05.251 "message": "Input/output error" 00:18:05.251 } 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:05.251 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.511 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.771 request: 00:18:05.771 { 00:18:05.771 "name": "nvme0", 00:18:05.771 "trtype": "tcp", 00:18:05.771 "traddr": "10.0.0.2", 00:18:05.771 "adrfam": "ipv4", 00:18:05.771 "trsvcid": "4420", 00:18:05.771 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.771 "prchk_reftag": false, 00:18:05.771 "prchk_guard": false, 00:18:05.771 "hdgst": false, 00:18:05.771 "ddgst": false, 00:18:05.771 "dhchap_key": "key3", 00:18:05.771 "allow_unrecognized_csi": false, 00:18:05.771 "method": "bdev_nvme_attach_controller", 00:18:05.771 "req_id": 1 00:18:05.771 } 00:18:05.771 Got JSON-RPC error response 00:18:05.771 response: 00:18:05.771 { 00:18:05.771 "code": -5, 00:18:05.771 "message": "Input/output error" 00:18:05.771 } 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.771 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.031 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.291 request: 00:18:06.291 { 00:18:06.291 "name": "nvme0", 00:18:06.291 "trtype": "tcp", 00:18:06.291 "traddr": "10.0.0.2", 00:18:06.291 "adrfam": "ipv4", 00:18:06.291 "trsvcid": "4420", 00:18:06.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:06.291 "prchk_reftag": false, 00:18:06.291 "prchk_guard": false, 00:18:06.291 "hdgst": false, 00:18:06.291 "ddgst": false, 00:18:06.291 "dhchap_key": "key0", 00:18:06.291 "dhchap_ctrlr_key": "key1", 00:18:06.291 "allow_unrecognized_csi": false, 00:18:06.291 "method": "bdev_nvme_attach_controller", 00:18:06.291 "req_id": 1 00:18:06.291 } 00:18:06.291 Got JSON-RPC error response 00:18:06.291 response: 00:18:06.291 { 00:18:06.291 "code": -5, 00:18:06.291 "message": "Input/output error" 00:18:06.291 } 00:18:06.291 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:06.291 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:06.291 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:06.291 18:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:06.291 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:06.291 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:06.291 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:06.551 nvme0n1 00:18:06.551 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:06.551 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:06.551 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:06.810 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:07.757 nvme0n1 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:07.757 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.016 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.016 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:18:08.016 18:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: --dhchap-ctrl-secret DHHC-1:03:ZTMzODhmNDQ2NmEwM2E3MzY3NmJhOGJlNTcwMTBkZDQ2YjBmZWUxNzc1ODY0ZTg2YTdmZDk2YjcwZWRmZjAzNqe6hw8=: 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.585 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:08.845 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:09.415 request: 00:18:09.415 { 00:18:09.415 "name": "nvme0", 00:18:09.415 "trtype": "tcp", 00:18:09.415 "traddr": "10.0.0.2", 00:18:09.415 "adrfam": "ipv4", 00:18:09.415 "trsvcid": "4420", 00:18:09.415 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:09.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:09.415 "prchk_reftag": false, 00:18:09.415 "prchk_guard": false, 00:18:09.415 "hdgst": false, 00:18:09.415 "ddgst": false, 00:18:09.415 "dhchap_key": "key1", 00:18:09.415 "allow_unrecognized_csi": false, 00:18:09.415 "method": "bdev_nvme_attach_controller", 00:18:09.415 "req_id": 1 00:18:09.415 } 00:18:09.415 Got JSON-RPC error response 00:18:09.415 response: 00:18:09.415 { 00:18:09.415 "code": -5, 00:18:09.415 "message": "Input/output error" 00:18:09.415 } 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:09.415 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:09.983 nvme0n1 00:18:09.983 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:09.983 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:09.983 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.243 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.243 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.243 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:10.503 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:10.763 nvme0n1 00:18:10.763 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:10.763 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:10.763 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.023 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.023 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.023 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.023 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:11.023 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.023 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.023 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: '' 2s 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: ]] 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODkzNGIxYTUxNjU2NTY4YTc1N2Y2MjNiN2Y0MWRiZmQxpucd: 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:11.024 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:13.561 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:13.561 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: 2s 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: ]] 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2Q4NjBhZDdiZGI0ODg0NDA5YWY5NDIzODcxY2Y3Zjg2ZDVkNmFlODVlZGFjNTM5ro9fCw==: 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:13.562 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:15.470 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:16.040 nvme0n1 00:18:16.040 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:16.040 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.040 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.040 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.040 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:16.040 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:16.609 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.869 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:17.129 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:17.389 request: 00:18:17.389 { 00:18:17.389 "name": "nvme0", 00:18:17.389 "dhchap_key": "key1", 00:18:17.389 "dhchap_ctrlr_key": "key3", 00:18:17.389 "method": "bdev_nvme_set_keys", 00:18:17.389 "req_id": 1 00:18:17.389 } 00:18:17.389 Got JSON-RPC error response 00:18:17.389 response: 00:18:17.389 { 00:18:17.389 "code": -13, 00:18:17.389 "message": "Permission denied" 00:18:17.389 } 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:17.389 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.648 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:17.648 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:18.586 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:18.586 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:18.586 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:18.847 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:19.791 nvme0n1 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
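By this point the target's expected keys for the host have been rotated to key2/key3, and the trace is deliberately rotating the live controller to a pair the target no longer accepts; the request/response that follows shows bdev_nvme_set_keys rejected with code -13 ("Permission denied"). A condensed sketch of the happy-path re-key the surrounding entries exercise, using the same RPCs and identifiers (the target-side rpc_cmd talks to SPDK's default RPC socket, while the host daemon listens on /var/tmp/host.sock):

    # target first: install the new key pair for this host NQN
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # then the host: re-authenticate the existing controller with the matching pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # any mismatched pair (key1/key3 earlier, key2/key0 here) must fail with -13

The negative cases are the point of this block: a host that re-keys to credentials the subsystem was never given must be turned away at the DH-CHAP exchange, not silently accepted.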
00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:19.791 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:20.051 request: 00:18:20.051 { 00:18:20.051 "name": "nvme0", 00:18:20.051 "dhchap_key": "key2", 00:18:20.051 "dhchap_ctrlr_key": "key0", 00:18:20.051 "method": "bdev_nvme_set_keys", 00:18:20.051 "req_id": 1 00:18:20.051 } 00:18:20.051 Got JSON-RPC error response 00:18:20.051 response: 00:18:20.051 { 00:18:20.051 "code": -13, 00:18:20.051 "message": "Permission denied" 00:18:20.051 } 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:20.051 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.311 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:20.311 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:21.251 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:21.251 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:21.251 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1196805 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1196805 ']' 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1196805 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:21.511 
18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1196805 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1196805' 00:18:21.511 killing process with pid 1196805 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1196805 00:18:21.511 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1196805 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:21.771 rmmod nvme_tcp 00:18:21.771 rmmod nvme_fabrics 00:18:21.771 rmmod nvme_keyring 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1222756 ']' 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1222756 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1222756 ']' 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1222756 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1222756 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1222756' 00:18:21.771 killing process with pid 1222756 00:18:21.771 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1222756 00:18:21.771 18:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1222756 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.032 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Fz0 /tmp/spdk.key-sha256.IIy /tmp/spdk.key-sha384.2gf /tmp/spdk.key-sha512.CCS /tmp/spdk.key-sha512.Jxx /tmp/spdk.key-sha384.Qni /tmp/spdk.key-sha256.kbF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:23.942 00:18:23.942 real 2m37.003s 00:18:23.942 user 5m52.585s 00:18:23.942 sys 0m24.862s 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.942 ************************************ 00:18:23.942 END TEST nvmf_auth_target 00:18:23.942 ************************************ 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:23.942 18:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.203 ************************************ 00:18:24.203 START TEST nvmf_bdevio_no_huge 00:18:24.203 ************************************ 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:24.203 * Looking for test storage... 
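With both positive and negative paths covered, auth.sh's cleanup routine tears the fixture down: both daemons are killed by pid, the nvme transport modules are unloaded, the SPDK iptables rules are stripped, the test namespace is removed, and the temporary key files are deleted. Reconstructed from the commands visible in the trace (killprocess is SPDK's helper; the bare kill/wait pair below approximates what it does for a well-behaved child process):

    kill 1196805 && wait 1196805    # host daemon (process name reactor_1)
    kill 1222756 && wait 1222756    # nvmf target (process name reactor_0)
    modprobe -v -r nvme-tcp         # trace shows rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1        # drop the test address from the second port
    rm -f /tmp/spdk.key-*           # stands for the seven key files the trace lists explicitly

The suite closes at 2m37s of wall time, and the harness rolls straight into nvmf_bdevio_no_huge, whose storage probe and lcov version check open the trace that continues below.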
00:18:24.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.203 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.204 --rc genhtml_branch_coverage=1 00:18:24.204 --rc genhtml_function_coverage=1 00:18:24.204 --rc genhtml_legend=1 00:18:24.204 --rc geninfo_all_blocks=1 00:18:24.204 --rc geninfo_unexecuted_blocks=1 00:18:24.204 00:18:24.204 ' 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.204 --rc genhtml_branch_coverage=1 00:18:24.204 --rc genhtml_function_coverage=1 00:18:24.204 --rc genhtml_legend=1 00:18:24.204 --rc geninfo_all_blocks=1 00:18:24.204 --rc geninfo_unexecuted_blocks=1 00:18:24.204 00:18:24.204 ' 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.204 --rc genhtml_branch_coverage=1 00:18:24.204 --rc genhtml_function_coverage=1 00:18:24.204 --rc genhtml_legend=1 00:18:24.204 --rc geninfo_all_blocks=1 00:18:24.204 --rc geninfo_unexecuted_blocks=1 00:18:24.204 00:18:24.204 ' 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.204 --rc genhtml_branch_coverage=1 00:18:24.204 --rc genhtml_function_coverage=1 00:18:24.204 --rc genhtml_legend=1 00:18:24.204 --rc geninfo_all_blocks=1 00:18:24.204 --rc geninfo_unexecuted_blocks=1 00:18:24.204 00:18:24.204 ' 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.204 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:24.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.464 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.465 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:24.465 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:24.465 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.465 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.601 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.601 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.602 
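Above, gather_supported_nvmf_pci_devs classifies NICs by PCI vendor:device ID into the e810, x722, and mlx arrays before choosing which ports the test binds (the continuation below keeps only the e810 list, matching SPDK_TEST_NVMF_NICS=e810). A hedged sketch of that lookup shape, with the pci_bus_cache population elided and only IDs visible in this log:

    declare -A pci_bus_cache   # "vendor:device" -> space-separated PCI addresses, filled by a bus scan
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # one E810 variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # the 0000:31:00.x ports found below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
    pci_devs=("${e810[@]}")                      # e810 run: only E810 ports are considered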
18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.602 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.602 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.602 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.602 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:18:32.602 00:18:32.602 --- 10.0.0.2 ping statistics --- 00:18:32.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.602 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:18:32.602 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:18:32.602 00:18:32.602 --- 10.0.0.1 ping statistics --- 00:18:32.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.603 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1231073 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1231073 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1231073 ']' 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.603 18:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.603 [2024-10-08 18:34:26.028212] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:18:32.603 [2024-10-08 18:34:26.028281] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:32.603 [2024-10-08 18:34:26.127423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.603 [2024-10-08 18:34:26.235432] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.603 [2024-10-08 18:34:26.235486] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.603 [2024-10-08 18:34:26.235495] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.603 [2024-10-08 18:34:26.235502] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.603 [2024-10-08 18:34:26.235508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
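Taken together, the nvmf_tcp_init steps traced above give the target its own network namespace (cvl_0_0_ns_spdk) holding one port of the E810 pair, leave the peer port in the root namespace as the initiator, open TCP/4420 through iptables, and verify reachability with one ping in each direction before nvmfappstart launches nvmf_tgt inside the namespace. The same sequence condensed into a hedged sketch (interface names, addresses, and flags copied from this log; the ipts helper is shown as plain iptables and paths are shortened):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                    # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns
    # start the target huge-page-free with 1024 MB of plain memory on cores 3-6 (mask 0x78):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78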
00:18:32.603 [2024-10-08 18:34:26.237042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:32.603 [2024-10-08 18:34:26.237213] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:18:32.603 [2024-10-08 18:34:26.237372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:18:32.603 [2024-10-08 18:34:26.237372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.863 [2024-10-08 18:34:26.899393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.863 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.124 Malloc0 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.124 [2024-10-08 18:34:26.953320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:33.124 { 00:18:33.124 "params": { 00:18:33.124 "name": "Nvme$subsystem", 00:18:33.124 "trtype": "$TEST_TRANSPORT", 00:18:33.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.124 "adrfam": "ipv4", 00:18:33.124 "trsvcid": "$NVMF_PORT", 00:18:33.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.124 "hdgst": ${hdgst:-false}, 00:18:33.124 "ddgst": ${ddgst:-false} 00:18:33.124 }, 00:18:33.124 "method": "bdev_nvme_attach_controller" 00:18:33.124 } 00:18:33.124 EOF 00:18:33.124 )") 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:33.124 18:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:33.124 "params": { 00:18:33.124 "name": "Nvme1", 00:18:33.124 "trtype": "tcp", 00:18:33.124 "traddr": "10.0.0.2", 00:18:33.124 "adrfam": "ipv4", 00:18:33.124 "trsvcid": "4420", 00:18:33.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.124 "hdgst": false, 00:18:33.124 "ddgst": false 00:18:33.124 }, 00:18:33.124 "method": "bdev_nvme_attach_controller" 00:18:33.124 }' 00:18:33.124 [2024-10-08 18:34:27.010783] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:18:33.124 [2024-10-08 18:34:27.010853] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1231330 ] 00:18:33.124 [2024-10-08 18:34:27.100830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.384 [2024-10-08 18:34:27.207227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.384 [2024-10-08 18:34:27.207390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.384 [2024-10-08 18:34:27.207390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.645 I/O targets: 00:18:33.645 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:33.645 00:18:33.645 00:18:33.645 CUnit - A unit testing framework for C - Version 2.1-3 00:18:33.645 http://cunit.sourceforge.net/ 00:18:33.645 00:18:33.645 00:18:33.645 Suite: bdevio tests on: Nvme1n1 00:18:33.645 Test: blockdev write read block ...passed 00:18:33.645 Test: blockdev write zeroes read block ...passed 00:18:33.645 Test: blockdev write zeroes read no split ...passed 00:18:33.645 Test: blockdev write zeroes read split ...passed 00:18:33.645 Test: blockdev write zeroes read split partial ...passed 00:18:33.645 Test: blockdev reset ...[2024-10-08 18:34:27.604166] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.645 [2024-10-08 18:34:27.604280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147e0b0 (9): Bad file descriptor 00:18:33.645 [2024-10-08 18:34:27.632955] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:33.645 passed 00:18:33.645 Test: blockdev write read 8 blocks ...passed 00:18:33.645 Test: blockdev write read size > 128k ...passed 00:18:33.645 Test: blockdev write read invalid size ...passed 00:18:33.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:33.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:33.645 Test: blockdev write read max offset ...passed 00:18:33.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:33.906 Test: blockdev writev readv 8 blocks ...passed 00:18:33.906 Test: blockdev writev readv 30 x 1block ...passed 00:18:33.906 Test: blockdev writev readv block ...passed 00:18:33.906 Test: blockdev writev readv size > 128k ...passed 00:18:33.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:33.906 Test: blockdev comparev and writev ...[2024-10-08 18:34:27.860020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.860070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.860087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.860096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.860609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.860625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.860639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.860647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.861247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.861259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.861273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.861282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.861786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.861798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.861812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.906 [2024-10-08 18:34:27.861820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:33.906 passed 00:18:33.906 Test: blockdev nvme passthru rw ...passed 00:18:33.906 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:34:27.946926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.906 [2024-10-08 18:34:27.946943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.947339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.906 [2024-10-08 18:34:27.947352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.947737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.906 [2024-10-08 18:34:27.947748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:33.906 [2024-10-08 18:34:27.948149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.906 [2024-10-08 18:34:27.948161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:33.906 passed 00:18:34.166 Test: blockdev nvme admin passthru ...passed 00:18:34.166 Test: blockdev copy ...passed 00:18:34.166 00:18:34.166 Run Summary: Type Total Ran Passed Failed Inactive 00:18:34.166 suites 1 1 n/a 0 0 00:18:34.166 tests 23 23 23 0 0 00:18:34.166 asserts 152 152 152 0 n/a 00:18:34.166 00:18:34.166 Elapsed time = 1.095 seconds 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.426 rmmod nvme_tcp 00:18:34.426 rmmod nvme_fabrics 00:18:34.426 rmmod nvme_keyring 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1231073 ']' 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1231073 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1231073 ']' 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1231073 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.426 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1231073 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1231073' 00:18:34.686 killing process with pid 1231073 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1231073 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1231073 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.686 18:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:37.228 00:18:37.228 real 0m12.786s 00:18:37.228 user 0m14.325s 00:18:37.228 sys 0m6.896s 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.228 ************************************ 00:18:37.228 END TEST nvmf_bdevio_no_huge 00:18:37.228 ************************************ 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.228 ************************************ 00:18:37.228 START TEST nvmf_tls 00:18:37.228 ************************************ 00:18:37.228 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:37.228 * Looking for test storage... 00:18:37.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.228 --rc genhtml_branch_coverage=1 00:18:37.228 --rc genhtml_function_coverage=1 00:18:37.228 --rc genhtml_legend=1 00:18:37.228 --rc geninfo_all_blocks=1 00:18:37.228 --rc geninfo_unexecuted_blocks=1 00:18:37.228 00:18:37.228 ' 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.228 --rc genhtml_branch_coverage=1 00:18:37.228 --rc genhtml_function_coverage=1 00:18:37.228 --rc genhtml_legend=1 00:18:37.228 --rc geninfo_all_blocks=1 00:18:37.228 --rc geninfo_unexecuted_blocks=1 00:18:37.228 00:18:37.228 ' 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.228 --rc genhtml_branch_coverage=1 00:18:37.228 --rc genhtml_function_coverage=1 00:18:37.228 --rc genhtml_legend=1 00:18:37.228 --rc geninfo_all_blocks=1 00:18:37.228 --rc geninfo_unexecuted_blocks=1 00:18:37.228 00:18:37.228 ' 00:18:37.228 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.228 --rc genhtml_branch_coverage=1 00:18:37.228 --rc genhtml_function_coverage=1 00:18:37.228 --rc genhtml_legend=1 00:18:37.228 --rc geninfo_all_blocks=1 00:18:37.228 --rc geninfo_unexecuted_blocks=1 00:18:37.228 00:18:37.228 ' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:37.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:37.229 18:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.364 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:45.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:45.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:45.365 Found net devices under 0000:31:00.0: cvl_0_0 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:45.365 Found net devices under 0000:31:00.1: cvl_0_1 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:45.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:18:45.365 00:18:45.365 --- 10.0.0.2 ping statistics --- 00:18:45.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.365 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:18:45.365 00:18:45.365 --- 10.0.0.1 ping statistics --- 00:18:45.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.365 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1235926 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1235926 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1235926 ']' 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.365 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.365 [2024-10-08 18:34:38.897905] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
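At this point nvmf_tcp_init has finished wiring the two e810 ports back to back: the first port moves into a private network namespace as the target side, both sides get 10.0.0.x addresses, an iptables exception opens TCP port 4420, a ping in each direction proves the path, and nvme-tcp is loaded before nvmf_tgt starts inside the namespace. The same sequence, condensed from the traced commands (interface names and addresses exactly as in the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp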
00:18:45.365 [2024-10-08 18:34:38.897969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.365 [2024-10-08 18:34:38.991463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.365 [2024-10-08 18:34:39.084701] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.365 [2024-10-08 18:34:39.084760] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.365 [2024-10-08 18:34:39.084768] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.366 [2024-10-08 18:34:39.084775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.366 [2024-10-08 18:34:39.084781] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.366 [2024-10-08 18:34:39.085608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:45.936 true 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.936 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:46.196 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:46.196 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:46.196 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:46.457 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.457 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:46.457 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:46.457 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:46.457 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:46.717 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.717 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:46.977 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:46.977 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:46.977 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.977 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:47.237 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:47.237 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:47.237 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:47.237 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.237 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:47.496 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:47.496 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:47.496 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:47.756 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:48.016 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:48.016 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:48.016 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.0TxjBBIGYW 00:18:48.016 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:48.016 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.04WOmOcDGf 00:18:48.016 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:48.017 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:48.017 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0TxjBBIGYW 00:18:48.017 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.04WOmOcDGf 00:18:48.017 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.017 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:48.277 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.0TxjBBIGYW 00:18:48.277 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0TxjBBIGYW 00:18:48.277 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.537 [2024-10-08 18:34:42.399361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:48.797 [2024-10-08 18:34:42.704095] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.797 [2024-10-08 18:34:42.704288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.797 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.057 malloc0 00:18:49.057 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.057 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0TxjBBIGYW 00:18:49.316 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.577 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0TxjBBIGYW 00:18:59.564 Initializing NVMe Controllers 00:18:59.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:59.564 Initialization complete. Launching workers. 00:18:59.564 ======================================================== 00:18:59.564 Latency(us) 00:18:59.564 Device Information : IOPS MiB/s Average min max 00:18:59.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18555.79 72.48 3449.28 1045.04 4067.95 00:18:59.564 ======================================================== 00:18:59.564 Total : 18555.79 72.48 3449.28 1045.04 4067.95 00:18:59.564 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0TxjBBIGYW 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0TxjBBIGYW 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1238798 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1238798 /var/tmp/bdevperf.sock 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1238798 ']' 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:59.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.564 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.564 [2024-10-08 18:34:53.543572] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:18:59.564 [2024-10-08 18:34:53.543630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238798 ] 00:18:59.823 [2024-10-08 18:34:53.622123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.824 [2024-10-08 18:34:53.684921] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.393 18:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.393 18:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:00.393 18:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0TxjBBIGYW 00:19:00.653 18:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.653 [2024-10-08 18:34:54.632112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.912 TLSTESTn1 00:19:00.912 18:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:00.912 Running I/O for 10 seconds... 
00:19:02.791 3995.00 IOPS, 15.61 MiB/s [2024-10-08T16:34:58.227Z] 4947.50 IOPS, 19.33 MiB/s [2024-10-08T16:34:59.165Z] 5266.67 IOPS, 20.57 MiB/s [2024-10-08T16:35:00.106Z] 5456.00 IOPS, 21.31 MiB/s [2024-10-08T16:35:01.046Z] 5548.20 IOPS, 21.67 MiB/s [2024-10-08T16:35:01.986Z] 5617.17 IOPS, 21.94 MiB/s [2024-10-08T16:35:02.924Z] 5729.43 IOPS, 22.38 MiB/s [2024-10-08T16:35:03.864Z] 5767.75 IOPS, 22.53 MiB/s [2024-10-08T16:35:05.245Z] 5759.33 IOPS, 22.50 MiB/s [2024-10-08T16:35:05.245Z] 5618.20 IOPS, 21.95 MiB/s 00:19:11.188 Latency(us) 00:19:11.188 [2024-10-08T16:35:05.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.188 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.188 Verification LBA range: start 0x0 length 0x2000 00:19:11.188 TLSTESTn1 : 10.01 5623.15 21.97 0.00 0.00 22728.48 5297.49 88255.15 00:19:11.188 [2024-10-08T16:35:05.245Z] =================================================================================================================== 00:19:11.188 [2024-10-08T16:35:05.245Z] Total : 5623.15 21.97 0.00 0.00 22728.48 5297.49 88255.15 00:19:11.188 { 00:19:11.188 "results": [ 00:19:11.188 { 00:19:11.188 "job": "TLSTESTn1", 00:19:11.188 "core_mask": "0x4", 00:19:11.188 "workload": "verify", 00:19:11.188 "status": "finished", 00:19:11.188 "verify_range": { 00:19:11.188 "start": 0, 00:19:11.188 "length": 8192 00:19:11.188 }, 00:19:11.188 "queue_depth": 128, 00:19:11.188 "io_size": 4096, 00:19:11.188 "runtime": 10.013956, 00:19:11.188 "iops": 5623.152328610192, 00:19:11.188 "mibps": 21.965438783633562, 00:19:11.188 "io_failed": 0, 00:19:11.188 "io_timeout": 0, 00:19:11.188 "avg_latency_us": 22728.483354762328, 00:19:11.188 "min_latency_us": 5297.493333333333, 00:19:11.188 "max_latency_us": 88255.14666666667 00:19:11.188 } 00:19:11.188 ], 00:19:11.188 "core_count": 1 00:19:11.188 } 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1238798 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1238798 ']' 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1238798 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238798 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238798' 00:19:11.188 killing process with pid 1238798 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1238798 00:19:11.188 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.188 00:19:11.188 Latency(us) 00:19:11.188 [2024-10-08T16:35:05.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.188 [2024-10-08T16:35:05.245Z] 
=================================================================================================================== 00:19:11.188 [2024-10-08T16:35:05.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.188 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1238798 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.04WOmOcDGf 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.04WOmOcDGf 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.04WOmOcDGf 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.04WOmOcDGf 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.188 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1241052 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1241052 /var/tmp/bdevperf.sock 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241052 ']' 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.189 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.189 [2024-10-08 18:35:05.120959] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:11.189 [2024-10-08 18:35:05.121023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241052 ] 00:19:11.189 [2024-10-08 18:35:05.197643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.449 [2024-10-08 18:35:05.250229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.04WOmOcDGf 00:19:12.018 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.278 [2024-10-08 18:35:06.224385] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.278 [2024-10-08 18:35:06.230289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:12.278 [2024-10-08 18:35:06.230450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325a20 (107): Transport endpoint is not connected 00:19:12.278 [2024-10-08 18:35:06.231446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325a20 (9): Bad file descriptor 00:19:12.278 [2024-10-08 18:35:06.232448] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:12.278 [2024-10-08 18:35:06.232455] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:12.278 [2024-10-08 18:35:06.232461] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:12.278 [2024-10-08 18:35:06.232469] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:12.278 request: 00:19:12.278 { 00:19:12.278 "name": "TLSTEST", 00:19:12.278 "trtype": "tcp", 00:19:12.278 "traddr": "10.0.0.2", 00:19:12.278 "adrfam": "ipv4", 00:19:12.278 "trsvcid": "4420", 00:19:12.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.278 "prchk_reftag": false, 00:19:12.278 "prchk_guard": false, 00:19:12.278 "hdgst": false, 00:19:12.278 "ddgst": false, 00:19:12.278 "psk": "key0", 00:19:12.278 "allow_unrecognized_csi": false, 00:19:12.278 "method": "bdev_nvme_attach_controller", 00:19:12.278 "req_id": 1 00:19:12.278 } 00:19:12.278 Got JSON-RPC error response 00:19:12.278 response: 00:19:12.278 { 00:19:12.278 "code": -5, 00:19:12.278 "message": "Input/output error" 00:19:12.278 } 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1241052 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241052 ']' 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241052 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241052 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241052' 00:19:12.278 killing process with pid 1241052 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241052 00:19:12.278 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.278 00:19:12.278 Latency(us) 00:19:12.278 [2024-10-08T16:35:06.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.278 [2024-10-08T16:35:06.335Z] =================================================================================================================== 00:19:12.278 [2024-10-08T16:35:06.335Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.278 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241052 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0TxjBBIGYW 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:12.537 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.0TxjBBIGYW 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0TxjBBIGYW 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0TxjBBIGYW 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1241218 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1241218 /var/tmp/bdevperf.sock 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241218 ']' 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.538 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.538 [2024-10-08 18:35:06.478798] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
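This second negative case (target/tls.sh@150) reuses the correct key file but presents hostnqn ...host2, which was never registered on the subsystem, so it is expected to end in the same JSON-RPC Input/output error as the wrong-key case above. All of these cases reduce to the same RPC pattern against bdevperf's private socket; condensed, with paths and NQNs taken from the trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
$RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.0TxjBBIGYW
$RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
# host2 has no PSK registered via nvmf_subsystem_add_host, so the target cannot
# map the TLS identity to a key and the attach is expected to fail with -5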
00:19:12.538 [2024-10-08 18:35:06.478855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241218 ] 00:19:12.538 [2024-10-08 18:35:06.556951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.797 [2024-10-08 18:35:06.609064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.368 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.368 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:13.368 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0TxjBBIGYW 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:13.628 [2024-10-08 18:35:07.607201] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.628 [2024-10-08 18:35:07.613527] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.628 [2024-10-08 18:35:07.613546] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.628 [2024-10-08 18:35:07.613565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:13.628 [2024-10-08 18:35:07.614355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3aa20 (107): Transport endpoint is not connected 00:19:13.628 [2024-10-08 18:35:07.615352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3aa20 (9): Bad file descriptor 00:19:13.628 [2024-10-08 18:35:07.616353] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.628 [2024-10-08 18:35:07.616360] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.628 [2024-10-08 18:35:07.616365] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:13.628 [2024-10-08 18:35:07.616374] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:13.628 request: 00:19:13.628 { 00:19:13.628 "name": "TLSTEST", 00:19:13.628 "trtype": "tcp", 00:19:13.628 "traddr": "10.0.0.2", 00:19:13.628 "adrfam": "ipv4", 00:19:13.628 "trsvcid": "4420", 00:19:13.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.628 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:13.628 "prchk_reftag": false, 00:19:13.628 "prchk_guard": false, 00:19:13.628 "hdgst": false, 00:19:13.628 "ddgst": false, 00:19:13.628 "psk": "key0", 00:19:13.628 "allow_unrecognized_csi": false, 00:19:13.628 "method": "bdev_nvme_attach_controller", 00:19:13.628 "req_id": 1 00:19:13.628 } 00:19:13.628 Got JSON-RPC error response 00:19:13.628 response: 00:19:13.628 { 00:19:13.628 "code": -5, 00:19:13.628 "message": "Input/output error" 00:19:13.628 } 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1241218 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241218 ']' 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241218 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.628 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241218 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241218' 00:19:13.888 killing process with pid 1241218 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241218 00:19:13.888 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.888 00:19:13.888 Latency(us) 00:19:13.888 [2024-10-08T16:35:07.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.888 [2024-10-08T16:35:07.945Z] =================================================================================================================== 00:19:13.888 [2024-10-08T16:35:07.945Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241218 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0TxjBBIGYW 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.0TxjBBIGYW 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:13.888 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0TxjBBIGYW 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0TxjBBIGYW 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1241505 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1241505 /var/tmp/bdevperf.sock 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241505 ']' 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.889 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.889 [2024-10-08 18:35:07.877340] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
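Annotation: the failure recorded above is the expected outcome of the first negative case. The target derives the TLS PSK identity from the connecting host and subsystem ("NVMe0R01 <hostnqn> <subnqn>" in the error records), and no PSK was registered for host2, so the handshake is torn down and the attach RPC surfaces -5 (Input/output error). The second case, just starting, swaps the subsystem instead (cnode2 with host1) and fails the same way. The failing client-side calls, condensed from the log (rpc.py abbreviates the full scripts/rpc.py path):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0TxjBBIGYW
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host2 --psk key0   # fails: no PSK for identity "... host2 ... cnode1"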
00:19:13.889 [2024-10-08 18:35:07.877391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241505 ] 00:19:14.149 [2024-10-08 18:35:07.954607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.149 [2024-10-08 18:35:08.005084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.720 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.720 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.720 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0TxjBBIGYW 00:19:14.980 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.980 [2024-10-08 18:35:09.007716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.980 [2024-10-08 18:35:09.016939] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:14.980 [2024-10-08 18:35:09.016957] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:14.980 [2024-10-08 18:35:09.016984] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.980 [2024-10-08 18:35:09.017883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77a20 (107): Transport endpoint is not connected 00:19:14.980 [2024-10-08 18:35:09.018879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77a20 (9): Bad file descriptor 00:19:14.980 [2024-10-08 18:35:09.019880] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:14.980 [2024-10-08 18:35:09.019887] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.980 [2024-10-08 18:35:09.019893] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:14.980 [2024-10-08 18:35:09.019901] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:14.980 request: 00:19:14.980 { 00:19:14.980 "name": "TLSTEST", 00:19:14.980 "trtype": "tcp", 00:19:14.980 "traddr": "10.0.0.2", 00:19:14.980 "adrfam": "ipv4", 00:19:14.980 "trsvcid": "4420", 00:19:14.980 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:14.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.980 "prchk_reftag": false, 00:19:14.980 "prchk_guard": false, 00:19:14.980 "hdgst": false, 00:19:14.980 "ddgst": false, 00:19:14.980 "psk": "key0", 00:19:14.980 "allow_unrecognized_csi": false, 00:19:14.980 "method": "bdev_nvme_attach_controller", 00:19:14.980 "req_id": 1 00:19:14.980 } 00:19:14.980 Got JSON-RPC error response 00:19:14.980 response: 00:19:14.980 { 00:19:14.980 "code": -5, 00:19:14.980 "message": "Input/output error" 00:19:14.980 } 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1241505 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241505 ']' 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241505 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241505 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241505' 00:19:15.240 killing process with pid 1241505 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241505 00:19:15.240 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.240 00:19:15.240 Latency(us) 00:19:15.240 [2024-10-08T16:35:09.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.240 [2024-10-08T16:35:09.297Z] =================================================================================================================== 00:19:15.240 [2024-10-08T16:35:09.297Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241505 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.240 
18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1241848 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1241848 /var/tmp/bdevperf.sock 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241848 ']' 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.240 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.240 [2024-10-08 18:35:09.287395] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
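Annotation: the third negative case passes an empty string as the key path. As the records below show, keyring_file refuses it up front (paths must be absolute), so no key0 exists when bdev_nvme_attach_controller later asks for it, which is why that RPC fails with -126 (Required key not available) rather than with a handshake error. Condensed (rpc.py abbreviates the full scripts/rpc.py path):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''   # -1: "Non-absolute paths are not allowed"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0   # -126: "Required key not available"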
00:19:15.240 [2024-10-08 18:35:09.287456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241848 ] 00:19:15.500 [2024-10-08 18:35:09.365282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.500 [2024-10-08 18:35:09.416841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.071 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.071 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.071 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:16.332 [2024-10-08 18:35:10.234709] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:16.332 [2024-10-08 18:35:10.234735] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:16.332 request: 00:19:16.332 { 00:19:16.332 "name": "key0", 00:19:16.332 "path": "", 00:19:16.332 "method": "keyring_file_add_key", 00:19:16.332 "req_id": 1 00:19:16.332 } 00:19:16.332 Got JSON-RPC error response 00:19:16.332 response: 00:19:16.332 { 00:19:16.332 "code": -1, 00:19:16.332 "message": "Operation not permitted" 00:19:16.332 } 00:19:16.332 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:16.598 [2024-10-08 18:35:10.419254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.598 [2024-10-08 18:35:10.419278] bdev_nvme.c:6494:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:16.598 request: 00:19:16.598 { 00:19:16.598 "name": "TLSTEST", 00:19:16.598 "trtype": "tcp", 00:19:16.598 "traddr": "10.0.0.2", 00:19:16.598 "adrfam": "ipv4", 00:19:16.598 "trsvcid": "4420", 00:19:16.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.598 "prchk_reftag": false, 00:19:16.598 "prchk_guard": false, 00:19:16.598 "hdgst": false, 00:19:16.598 "ddgst": false, 00:19:16.598 "psk": "key0", 00:19:16.598 "allow_unrecognized_csi": false, 00:19:16.598 "method": "bdev_nvme_attach_controller", 00:19:16.598 "req_id": 1 00:19:16.598 } 00:19:16.598 Got JSON-RPC error response 00:19:16.598 response: 00:19:16.598 { 00:19:16.598 "code": -126, 00:19:16.598 "message": "Required key not available" 00:19:16.598 } 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1241848 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241848 ']' 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241848 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1241848 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241848' 00:19:16.598 killing process with pid 1241848 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241848 00:19:16.598 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.598 00:19:16.598 Latency(us) 00:19:16.598 [2024-10-08T16:35:10.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.598 [2024-10-08T16:35:10.655Z] =================================================================================================================== 00:19:16.598 [2024-10-08T16:35:10.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241848 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1235926 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1235926 ']' 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1235926 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.598 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1235926 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1235926' 00:19:16.911 killing process with pid 1235926 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1235926 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1235926 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:16.911 18:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ca1e6yuQlX 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ca1e6yuQlX 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1242206 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1242206 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1242206 ']' 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.911 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.911 [2024-10-08 18:35:10.932299] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:16.911 [2024-10-08 18:35:10.932354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.216 [2024-10-08 18:35:11.016919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.216 [2024-10-08 18:35:11.071510] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.216 [2024-10-08 18:35:11.071545] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:17.216 [2024-10-08 18:35:11.071551] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.216 [2024-10-08 18:35:11.071555] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.216 [2024-10-08 18:35:11.071559] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.216 [2024-10-08 18:35:11.072060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ca1e6yuQlX 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ca1e6yuQlX 00:19:17.866 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.127 [2024-10-08 18:35:11.927159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.127 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.127 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.386 [2024-10-08 18:35:12.296061] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.386 [2024-10-08 18:35:12.296253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.386 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:18.646 malloc0 00:19:18.646 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:18.646 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:18.906 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ca1e6yuQlX 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ca1e6yuQlX 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1242580 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1242580 /var/tmp/bdevperf.sock 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1242580 ']' 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.167 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.167 [2024-10-08 18:35:13.083899] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
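Annotation: the key used from here on, /tmp/tmp.ca1e6yuQlX, holds the retained-format PSK produced by format_interchange_psk above: the "NVMeTLSkey-1" prefix, a hash identifier ("02", from the digest=2 argument), and the base64 of the configured secret with a 4-byte CRC32 appended. A sketch of that computation, mirroring the "python -" helper the log invokes (the little-endian CRC byte order is an assumption):

python3 - <<'EOF'
import base64
import zlib

key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity check appended to the secret
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
# expected to reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: value logged above
EOF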
00:19:19.167 [2024-10-08 18:35:13.083949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242580 ] 00:19:19.167 [2024-10-08 18:35:13.161295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.167 [2024-10-08 18:35:13.213971] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.108 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.108 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:20.108 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:20.108 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.368 [2024-10-08 18:35:14.224537] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.368 TLSTESTn1 00:19:20.368 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.368 Running I/O for 10 seconds... 00:19:22.694 4651.00 IOPS, 18.17 MiB/s [2024-10-08T16:35:17.690Z] 4810.50 IOPS, 18.79 MiB/s [2024-10-08T16:35:18.628Z] 5279.67 IOPS, 20.62 MiB/s [2024-10-08T16:35:19.567Z] 5401.00 IOPS, 21.10 MiB/s [2024-10-08T16:35:20.507Z] 5397.40 IOPS, 21.08 MiB/s [2024-10-08T16:35:21.446Z] 5335.33 IOPS, 20.84 MiB/s [2024-10-08T16:35:22.830Z] 5302.57 IOPS, 20.71 MiB/s [2024-10-08T16:35:23.769Z] 5425.88 IOPS, 21.19 MiB/s [2024-10-08T16:35:24.711Z] 5420.44 IOPS, 21.17 MiB/s [2024-10-08T16:35:24.711Z] 5294.10 IOPS, 20.68 MiB/s 00:19:30.654 Latency(us) 00:19:30.654 [2024-10-08T16:35:24.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.654 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.654 Verification LBA range: start 0x0 length 0x2000 00:19:30.654 TLSTESTn1 : 10.01 5299.30 20.70 0.00 0.00 24121.65 4450.99 24685.23 00:19:30.654 [2024-10-08T16:35:24.711Z] =================================================================================================================== 00:19:30.654 [2024-10-08T16:35:24.711Z] Total : 5299.30 20.70 0.00 0.00 24121.65 4450.99 24685.23 00:19:30.654 { 00:19:30.654 "results": [ 00:19:30.654 { 00:19:30.654 "job": "TLSTESTn1", 00:19:30.654 "core_mask": "0x4", 00:19:30.654 "workload": "verify", 00:19:30.654 "status": "finished", 00:19:30.654 "verify_range": { 00:19:30.654 "start": 0, 00:19:30.654 "length": 8192 00:19:30.654 }, 00:19:30.654 "queue_depth": 128, 00:19:30.654 "io_size": 4096, 00:19:30.654 "runtime": 10.014158, 00:19:30.654 "iops": 5299.297254946447, 00:19:30.654 "mibps": 20.700379902134557, 00:19:30.654 "io_failed": 0, 00:19:30.654 "io_timeout": 0, 00:19:30.654 "avg_latency_us": 24121.650523353685, 00:19:30.654 "min_latency_us": 4450.986666666667, 00:19:30.654 "max_latency_us": 24685.226666666666 00:19:30.654 } 00:19:30.654 ], 00:19:30.654 
"core_count": 1 00:19:30.654 } 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1242580 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1242580 ']' 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1242580 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242580 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242580' 00:19:30.654 killing process with pid 1242580 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1242580 00:19:30.654 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.654 00:19:30.654 Latency(us) 00:19:30.654 [2024-10-08T16:35:24.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.654 [2024-10-08T16:35:24.711Z] =================================================================================================================== 00:19:30.654 [2024-10-08T16:35:24.711Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1242580 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ca1e6yuQlX 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ca1e6yuQlX 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ca1e6yuQlX 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.654 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ca1e6yuQlX 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ca1e6yuQlX 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1244919 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1244919 /var/tmp/bdevperf.sock 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1244919 ']' 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.655 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.655 [2024-10-08 18:35:24.703621] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
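Annotation: before this bdevperf instance was started, the test flipped the key file to mode 0666 (target/tls.sh@171). The records that follow show keyring_file rejecting the file ("Invalid permissions ... 0100666"), so the attach fails with -126 just as in the empty-path case; the file is only restored to owner-only access at target/tls.sh@182. Condensed:

chmod 0666 /tmp/tmp.ca1e6yuQlX   # world-readable: keyring_file must now refuse the file
# ... keyring_file_add_key and the attach both fail in the records below ...
chmod 0600 /tmp/tmp.ca1e6yuQlX   # restored later, before the key is reused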
00:19:30.655 [2024-10-08 18:35:24.703675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244919 ] 00:19:30.915 [2024-10-08 18:35:24.780701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.915 [2024-10-08 18:35:24.831407] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.485 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.485 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.485 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:31.746 [2024-10-08 18:35:25.661185] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ca1e6yuQlX': 0100666 00:19:31.746 [2024-10-08 18:35:25.661216] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:31.746 request: 00:19:31.746 { 00:19:31.746 "name": "key0", 00:19:31.746 "path": "/tmp/tmp.ca1e6yuQlX", 00:19:31.746 "method": "keyring_file_add_key", 00:19:31.747 "req_id": 1 00:19:31.747 } 00:19:31.747 Got JSON-RPC error response 00:19:31.747 response: 00:19:31.747 { 00:19:31.747 "code": -1, 00:19:31.747 "message": "Operation not permitted" 00:19:31.747 } 00:19:31.747 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.008 [2024-10-08 18:35:25.841708] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.008 [2024-10-08 18:35:25.841732] bdev_nvme.c:6494:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:32.008 request: 00:19:32.008 { 00:19:32.008 "name": "TLSTEST", 00:19:32.008 "trtype": "tcp", 00:19:32.008 "traddr": "10.0.0.2", 00:19:32.008 "adrfam": "ipv4", 00:19:32.008 "trsvcid": "4420", 00:19:32.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.008 "prchk_reftag": false, 00:19:32.008 "prchk_guard": false, 00:19:32.008 "hdgst": false, 00:19:32.008 "ddgst": false, 00:19:32.008 "psk": "key0", 00:19:32.008 "allow_unrecognized_csi": false, 00:19:32.008 "method": "bdev_nvme_attach_controller", 00:19:32.008 "req_id": 1 00:19:32.008 } 00:19:32.008 Got JSON-RPC error response 00:19:32.008 response: 00:19:32.008 { 00:19:32.008 "code": -126, 00:19:32.008 "message": "Required key not available" 00:19:32.008 } 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1244919 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1244919 ']' 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1244919 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1244919 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1244919' 00:19:32.008 killing process with pid 1244919 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1244919 00:19:32.008 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.008 00:19:32.008 Latency(us) 00:19:32.008 [2024-10-08T16:35:26.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.008 [2024-10-08T16:35:26.065Z] =================================================================================================================== 00:19:32.008 [2024-10-08T16:35:26.065Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1244919 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1242206 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1242206 ']' 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1242206 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.008 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242206 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242206' 00:19:32.269 killing process with pid 1242206 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1242206 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1242206 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1245271 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1245271 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1245271 ']' 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.269 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.269 [2024-10-08 18:35:26.302078] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:32.269 [2024-10-08 18:35:26.302132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.529 [2024-10-08 18:35:26.385527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.529 [2024-10-08 18:35:26.437467] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.529 [2024-10-08 18:35:26.437501] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.529 [2024-10-08 18:35:26.437507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.529 [2024-10-08 18:35:26.437511] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.529 [2024-10-08 18:35:26.437516] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
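Annotation: test 178, beginning in the next records, repeats the permission check on the target side while the key file is still mode 0666. setup_nvmf_tgt gets as far as creating the transport, subsystem, listener, and namespace, but keyring_file_add_key is rejected, and nvmf_subsystem_add_host then fails with -32603 because key0 was never created. Condensed (rpc.py abbreviates the full scripts/rpc.py path):

rpc.py keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX   # -1: file is mode 0100666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0               # -32603: Key 'key0' does not exist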
00:19:32.529 [2024-10-08 18:35:26.437995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ca1e6yuQlX 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ca1e6yuQlX 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ca1e6yuQlX 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ca1e6yuQlX 00:19:33.100 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.360 [2024-10-08 18:35:27.301169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.360 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.620 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.620 [2024-10-08 18:35:27.670075] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.620 [2024-10-08 18:35:27.670271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.881 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.881 malloc0 00:19:33.881 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:34.401 [2024-10-08 
18:35:28.211945] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ca1e6yuQlX': 0100666 00:19:34.401 [2024-10-08 18:35:28.211968] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:34.401 request: 00:19:34.401 { 00:19:34.401 "name": "key0", 00:19:34.401 "path": "/tmp/tmp.ca1e6yuQlX", 00:19:34.401 "method": "keyring_file_add_key", 00:19:34.401 "req_id": 1 00:19:34.401 } 00:19:34.401 Got JSON-RPC error response 00:19:34.401 response: 00:19:34.401 { 00:19:34.401 "code": -1, 00:19:34.401 "message": "Operation not permitted" 00:19:34.401 } 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.401 [2024-10-08 18:35:28.388395] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:34.401 [2024-10-08 18:35:28.388420] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:34.401 request: 00:19:34.401 { 00:19:34.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.401 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.401 "psk": "key0", 00:19:34.401 "method": "nvmf_subsystem_add_host", 00:19:34.401 "req_id": 1 00:19:34.401 } 00:19:34.401 Got JSON-RPC error response 00:19:34.401 response: 00:19:34.401 { 00:19:34.401 "code": -32603, 00:19:34.401 "message": "Internal error" 00:19:34.401 } 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1245271 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1245271 ']' 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1245271 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.401 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1245271 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1245271' 00:19:34.661 killing process with pid 1245271 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1245271 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1245271 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ca1e6yuQlX 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:34.661 18:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1245650 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1245650 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1245650 ']' 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.661 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.661 [2024-10-08 18:35:28.671152] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:34.661 [2024-10-08 18:35:28.671207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.921 [2024-10-08 18:35:28.754845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.921 [2024-10-08 18:35:28.808730] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.921 [2024-10-08 18:35:28.808765] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.921 [2024-10-08 18:35:28.808771] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.922 [2024-10-08 18:35:28.808775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.922 [2024-10-08 18:35:28.808779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
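[annotation] The two RPC failures just above are the negative half of this test: keyring_file_add_key refuses the PSK file because its mode is 0666 (group/world readable), and nvmf_subsystem_add_host then fails with "Internal error" because no key named key0 was ever registered. tls.sh@182 fixes the mode with chmod 0600 before the suite retries. A minimal sketch of that check-and-fix, assuming an illustrative key path and a made-up PSK interchange string (neither taken from this run); the rpc.py commands themselves appear verbatim in this log:

    # Sketch only: KEY path and PSK contents are placeholders, not from this run.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=/tmp/psk.key
    printf 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmY=:\n' > "$KEY"

    chmod 0666 "$KEY"    # too open: the keyring rejects it, as logged above
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key0 "$KEY" || echo "rejected (expected)"

    chmod 0600 "$KEY"    # owner-only, the mode the suite switches to
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key0 "$KEY"
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0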
00:19:34.922 [2024-10-08 18:35:28.809270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ca1e6yuQlX 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ca1e6yuQlX 00:19:35.492 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.752 [2024-10-08 18:35:29.668582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.752 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.012 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.012 [2024-10-08 18:35:30.005412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.012 [2024-10-08 18:35:30.005614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.012 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.272 malloc0 00:19:36.272 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.533 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:36.533 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1246034 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1246034 /var/tmp/bdevperf.sock 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1246034 ']' 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.793 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.793 [2024-10-08 18:35:30.755278] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:36.793 [2024-10-08 18:35:30.755335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246034 ] 00:19:36.793 [2024-10-08 18:35:30.830931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.052 [2024-10-08 18:35:30.883727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:37.881 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.881 [2024-10-08 18:35:31.874259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.139 TLSTESTn1 00:19:38.139 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:38.399 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:38.399 "subsystems": [ 00:19:38.399 { 00:19:38.399 "subsystem": "keyring", 00:19:38.399 "config": [ 00:19:38.399 { 00:19:38.399 "method": "keyring_file_add_key", 00:19:38.399 "params": { 00:19:38.399 "name": "key0", 00:19:38.399 "path": "/tmp/tmp.ca1e6yuQlX" 00:19:38.399 } 00:19:38.399 } 00:19:38.399 ] 00:19:38.399 }, 00:19:38.399 { 00:19:38.399 "subsystem": "iobuf", 00:19:38.399 "config": [ 00:19:38.399 { 00:19:38.399 "method": "iobuf_set_options", 00:19:38.399 "params": { 00:19:38.399 "small_pool_count": 8192, 00:19:38.399 "large_pool_count": 1024, 00:19:38.399 "small_bufsize": 8192, 00:19:38.399 "large_bufsize": 135168 00:19:38.399 } 00:19:38.399 } 00:19:38.399 ] 00:19:38.399 }, 00:19:38.399 { 00:19:38.399 "subsystem": "sock", 00:19:38.399 "config": [ 00:19:38.399 { 00:19:38.399 "method": "sock_set_default_impl", 00:19:38.399 "params": { 00:19:38.399 "impl_name": "posix" 00:19:38.399 } 00:19:38.399 }, 
00:19:38.399 { 00:19:38.399 "method": "sock_impl_set_options", 00:19:38.399 "params": { 00:19:38.399 "impl_name": "ssl", 00:19:38.400 "recv_buf_size": 4096, 00:19:38.400 "send_buf_size": 4096, 00:19:38.400 "enable_recv_pipe": true, 00:19:38.400 "enable_quickack": false, 00:19:38.400 "enable_placement_id": 0, 00:19:38.400 "enable_zerocopy_send_server": true, 00:19:38.400 "enable_zerocopy_send_client": false, 00:19:38.400 "zerocopy_threshold": 0, 00:19:38.400 "tls_version": 0, 00:19:38.400 "enable_ktls": false 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "sock_impl_set_options", 00:19:38.400 "params": { 00:19:38.400 "impl_name": "posix", 00:19:38.400 "recv_buf_size": 2097152, 00:19:38.400 "send_buf_size": 2097152, 00:19:38.400 "enable_recv_pipe": true, 00:19:38.400 "enable_quickack": false, 00:19:38.400 "enable_placement_id": 0, 00:19:38.400 "enable_zerocopy_send_server": true, 00:19:38.400 "enable_zerocopy_send_client": false, 00:19:38.400 "zerocopy_threshold": 0, 00:19:38.400 "tls_version": 0, 00:19:38.400 "enable_ktls": false 00:19:38.400 } 00:19:38.400 } 00:19:38.400 ] 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "subsystem": "vmd", 00:19:38.400 "config": [] 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "subsystem": "accel", 00:19:38.400 "config": [ 00:19:38.400 { 00:19:38.400 "method": "accel_set_options", 00:19:38.400 "params": { 00:19:38.400 "small_cache_size": 128, 00:19:38.400 "large_cache_size": 16, 00:19:38.400 "task_count": 2048, 00:19:38.400 "sequence_count": 2048, 00:19:38.400 "buf_count": 2048 00:19:38.400 } 00:19:38.400 } 00:19:38.400 ] 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "subsystem": "bdev", 00:19:38.400 "config": [ 00:19:38.400 { 00:19:38.400 "method": "bdev_set_options", 00:19:38.400 "params": { 00:19:38.400 "bdev_io_pool_size": 65535, 00:19:38.400 "bdev_io_cache_size": 256, 00:19:38.400 "bdev_auto_examine": true, 00:19:38.400 "iobuf_small_cache_size": 128, 00:19:38.400 "iobuf_large_cache_size": 16 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "bdev_raid_set_options", 00:19:38.400 "params": { 00:19:38.400 "process_window_size_kb": 1024, 00:19:38.400 "process_max_bandwidth_mb_sec": 0 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "bdev_iscsi_set_options", 00:19:38.400 "params": { 00:19:38.400 "timeout_sec": 30 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "bdev_nvme_set_options", 00:19:38.400 "params": { 00:19:38.400 "action_on_timeout": "none", 00:19:38.400 "timeout_us": 0, 00:19:38.400 "timeout_admin_us": 0, 00:19:38.400 "keep_alive_timeout_ms": 10000, 00:19:38.400 "arbitration_burst": 0, 00:19:38.400 "low_priority_weight": 0, 00:19:38.400 "medium_priority_weight": 0, 00:19:38.400 "high_priority_weight": 0, 00:19:38.400 "nvme_adminq_poll_period_us": 10000, 00:19:38.400 "nvme_ioq_poll_period_us": 0, 00:19:38.400 "io_queue_requests": 0, 00:19:38.400 "delay_cmd_submit": true, 00:19:38.400 "transport_retry_count": 4, 00:19:38.400 "bdev_retry_count": 3, 00:19:38.400 "transport_ack_timeout": 0, 00:19:38.400 "ctrlr_loss_timeout_sec": 0, 00:19:38.400 "reconnect_delay_sec": 0, 00:19:38.400 "fast_io_fail_timeout_sec": 0, 00:19:38.400 "disable_auto_failback": false, 00:19:38.400 "generate_uuids": false, 00:19:38.400 "transport_tos": 0, 00:19:38.400 "nvme_error_stat": false, 00:19:38.400 "rdma_srq_size": 0, 00:19:38.400 "io_path_stat": false, 00:19:38.400 "allow_accel_sequence": false, 00:19:38.400 "rdma_max_cq_size": 0, 00:19:38.400 "rdma_cm_event_timeout_ms": 0, 00:19:38.400 
"dhchap_digests": [ 00:19:38.400 "sha256", 00:19:38.400 "sha384", 00:19:38.400 "sha512" 00:19:38.400 ], 00:19:38.400 "dhchap_dhgroups": [ 00:19:38.400 "null", 00:19:38.400 "ffdhe2048", 00:19:38.400 "ffdhe3072", 00:19:38.400 "ffdhe4096", 00:19:38.400 "ffdhe6144", 00:19:38.400 "ffdhe8192" 00:19:38.400 ] 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "bdev_nvme_set_hotplug", 00:19:38.400 "params": { 00:19:38.400 "period_us": 100000, 00:19:38.400 "enable": false 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "bdev_malloc_create", 00:19:38.400 "params": { 00:19:38.400 "name": "malloc0", 00:19:38.400 "num_blocks": 8192, 00:19:38.400 "block_size": 4096, 00:19:38.400 "physical_block_size": 4096, 00:19:38.400 "uuid": "318361dd-2af9-4ad9-90c5-14e1560efca0", 00:19:38.400 "optimal_io_boundary": 0, 00:19:38.400 "md_size": 0, 00:19:38.400 "dif_type": 0, 00:19:38.400 "dif_is_head_of_md": false, 00:19:38.400 "dif_pi_format": 0 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "bdev_wait_for_examine" 00:19:38.400 } 00:19:38.400 ] 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "subsystem": "nbd", 00:19:38.400 "config": [] 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "subsystem": "scheduler", 00:19:38.400 "config": [ 00:19:38.400 { 00:19:38.400 "method": "framework_set_scheduler", 00:19:38.400 "params": { 00:19:38.400 "name": "static" 00:19:38.400 } 00:19:38.400 } 00:19:38.400 ] 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "subsystem": "nvmf", 00:19:38.400 "config": [ 00:19:38.400 { 00:19:38.400 "method": "nvmf_set_config", 00:19:38.400 "params": { 00:19:38.400 "discovery_filter": "match_any", 00:19:38.400 "admin_cmd_passthru": { 00:19:38.400 "identify_ctrlr": false 00:19:38.400 }, 00:19:38.400 "dhchap_digests": [ 00:19:38.400 "sha256", 00:19:38.400 "sha384", 00:19:38.400 "sha512" 00:19:38.400 ], 00:19:38.400 "dhchap_dhgroups": [ 00:19:38.400 "null", 00:19:38.400 "ffdhe2048", 00:19:38.400 "ffdhe3072", 00:19:38.400 "ffdhe4096", 00:19:38.400 "ffdhe6144", 00:19:38.400 "ffdhe8192" 00:19:38.400 ] 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "nvmf_set_max_subsystems", 00:19:38.400 "params": { 00:19:38.400 "max_subsystems": 1024 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "nvmf_set_crdt", 00:19:38.400 "params": { 00:19:38.400 "crdt1": 0, 00:19:38.400 "crdt2": 0, 00:19:38.400 "crdt3": 0 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "nvmf_create_transport", 00:19:38.400 "params": { 00:19:38.400 "trtype": "TCP", 00:19:38.400 "max_queue_depth": 128, 00:19:38.400 "max_io_qpairs_per_ctrlr": 127, 00:19:38.400 "in_capsule_data_size": 4096, 00:19:38.400 "max_io_size": 131072, 00:19:38.400 "io_unit_size": 131072, 00:19:38.400 "max_aq_depth": 128, 00:19:38.400 "num_shared_buffers": 511, 00:19:38.400 "buf_cache_size": 4294967295, 00:19:38.400 "dif_insert_or_strip": false, 00:19:38.400 "zcopy": false, 00:19:38.400 "c2h_success": false, 00:19:38.400 "sock_priority": 0, 00:19:38.400 "abort_timeout_sec": 1, 00:19:38.400 "ack_timeout": 0, 00:19:38.400 "data_wr_pool_size": 0 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.400 "method": "nvmf_create_subsystem", 00:19:38.400 "params": { 00:19:38.400 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.400 "allow_any_host": false, 00:19:38.400 "serial_number": "SPDK00000000000001", 00:19:38.400 "model_number": "SPDK bdev Controller", 00:19:38.400 "max_namespaces": 10, 00:19:38.400 "min_cntlid": 1, 00:19:38.400 "max_cntlid": 65519, 00:19:38.400 
"ana_reporting": false 00:19:38.400 } 00:19:38.400 }, 00:19:38.400 { 00:19:38.401 "method": "nvmf_subsystem_add_host", 00:19:38.401 "params": { 00:19:38.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.401 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.401 "psk": "key0" 00:19:38.401 } 00:19:38.401 }, 00:19:38.401 { 00:19:38.401 "method": "nvmf_subsystem_add_ns", 00:19:38.401 "params": { 00:19:38.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.401 "namespace": { 00:19:38.401 "nsid": 1, 00:19:38.401 "bdev_name": "malloc0", 00:19:38.401 "nguid": "318361DD2AF94AD990C514E1560EFCA0", 00:19:38.401 "uuid": "318361dd-2af9-4ad9-90c5-14e1560efca0", 00:19:38.401 "no_auto_visible": false 00:19:38.401 } 00:19:38.401 } 00:19:38.401 }, 00:19:38.401 { 00:19:38.401 "method": "nvmf_subsystem_add_listener", 00:19:38.401 "params": { 00:19:38.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.401 "listen_address": { 00:19:38.401 "trtype": "TCP", 00:19:38.401 "adrfam": "IPv4", 00:19:38.401 "traddr": "10.0.0.2", 00:19:38.401 "trsvcid": "4420" 00:19:38.401 }, 00:19:38.401 "secure_channel": true 00:19:38.401 } 00:19:38.401 } 00:19:38.401 ] 00:19:38.401 } 00:19:38.401 ] 00:19:38.401 }' 00:19:38.401 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:38.662 "subsystems": [ 00:19:38.662 { 00:19:38.662 "subsystem": "keyring", 00:19:38.662 "config": [ 00:19:38.662 { 00:19:38.662 "method": "keyring_file_add_key", 00:19:38.662 "params": { 00:19:38.662 "name": "key0", 00:19:38.662 "path": "/tmp/tmp.ca1e6yuQlX" 00:19:38.662 } 00:19:38.662 } 00:19:38.662 ] 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "subsystem": "iobuf", 00:19:38.662 "config": [ 00:19:38.662 { 00:19:38.662 "method": "iobuf_set_options", 00:19:38.662 "params": { 00:19:38.662 "small_pool_count": 8192, 00:19:38.662 "large_pool_count": 1024, 00:19:38.662 "small_bufsize": 8192, 00:19:38.662 "large_bufsize": 135168 00:19:38.662 } 00:19:38.662 } 00:19:38.662 ] 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "subsystem": "sock", 00:19:38.662 "config": [ 00:19:38.662 { 00:19:38.662 "method": "sock_set_default_impl", 00:19:38.662 "params": { 00:19:38.662 "impl_name": "posix" 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "sock_impl_set_options", 00:19:38.662 "params": { 00:19:38.662 "impl_name": "ssl", 00:19:38.662 "recv_buf_size": 4096, 00:19:38.662 "send_buf_size": 4096, 00:19:38.662 "enable_recv_pipe": true, 00:19:38.662 "enable_quickack": false, 00:19:38.662 "enable_placement_id": 0, 00:19:38.662 "enable_zerocopy_send_server": true, 00:19:38.662 "enable_zerocopy_send_client": false, 00:19:38.662 "zerocopy_threshold": 0, 00:19:38.662 "tls_version": 0, 00:19:38.662 "enable_ktls": false 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "sock_impl_set_options", 00:19:38.662 "params": { 00:19:38.662 "impl_name": "posix", 00:19:38.662 "recv_buf_size": 2097152, 00:19:38.662 "send_buf_size": 2097152, 00:19:38.662 "enable_recv_pipe": true, 00:19:38.662 "enable_quickack": false, 00:19:38.662 "enable_placement_id": 0, 00:19:38.662 "enable_zerocopy_send_server": true, 00:19:38.662 "enable_zerocopy_send_client": false, 00:19:38.662 "zerocopy_threshold": 0, 00:19:38.662 "tls_version": 0, 00:19:38.662 "enable_ktls": false 00:19:38.662 } 00:19:38.662 } 00:19:38.662 ] 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 
"subsystem": "vmd", 00:19:38.662 "config": [] 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "subsystem": "accel", 00:19:38.662 "config": [ 00:19:38.662 { 00:19:38.662 "method": "accel_set_options", 00:19:38.662 "params": { 00:19:38.662 "small_cache_size": 128, 00:19:38.662 "large_cache_size": 16, 00:19:38.662 "task_count": 2048, 00:19:38.662 "sequence_count": 2048, 00:19:38.662 "buf_count": 2048 00:19:38.662 } 00:19:38.662 } 00:19:38.662 ] 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "subsystem": "bdev", 00:19:38.662 "config": [ 00:19:38.662 { 00:19:38.662 "method": "bdev_set_options", 00:19:38.662 "params": { 00:19:38.662 "bdev_io_pool_size": 65535, 00:19:38.662 "bdev_io_cache_size": 256, 00:19:38.662 "bdev_auto_examine": true, 00:19:38.662 "iobuf_small_cache_size": 128, 00:19:38.662 "iobuf_large_cache_size": 16 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "bdev_raid_set_options", 00:19:38.662 "params": { 00:19:38.662 "process_window_size_kb": 1024, 00:19:38.662 "process_max_bandwidth_mb_sec": 0 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "bdev_iscsi_set_options", 00:19:38.662 "params": { 00:19:38.662 "timeout_sec": 30 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "bdev_nvme_set_options", 00:19:38.662 "params": { 00:19:38.662 "action_on_timeout": "none", 00:19:38.662 "timeout_us": 0, 00:19:38.662 "timeout_admin_us": 0, 00:19:38.662 "keep_alive_timeout_ms": 10000, 00:19:38.662 "arbitration_burst": 0, 00:19:38.662 "low_priority_weight": 0, 00:19:38.662 "medium_priority_weight": 0, 00:19:38.662 "high_priority_weight": 0, 00:19:38.662 "nvme_adminq_poll_period_us": 10000, 00:19:38.662 "nvme_ioq_poll_period_us": 0, 00:19:38.662 "io_queue_requests": 512, 00:19:38.662 "delay_cmd_submit": true, 00:19:38.662 "transport_retry_count": 4, 00:19:38.662 "bdev_retry_count": 3, 00:19:38.662 "transport_ack_timeout": 0, 00:19:38.662 "ctrlr_loss_timeout_sec": 0, 00:19:38.662 "reconnect_delay_sec": 0, 00:19:38.662 "fast_io_fail_timeout_sec": 0, 00:19:38.662 "disable_auto_failback": false, 00:19:38.662 "generate_uuids": false, 00:19:38.662 "transport_tos": 0, 00:19:38.662 "nvme_error_stat": false, 00:19:38.662 "rdma_srq_size": 0, 00:19:38.662 "io_path_stat": false, 00:19:38.662 "allow_accel_sequence": false, 00:19:38.662 "rdma_max_cq_size": 0, 00:19:38.662 "rdma_cm_event_timeout_ms": 0, 00:19:38.662 "dhchap_digests": [ 00:19:38.662 "sha256", 00:19:38.662 "sha384", 00:19:38.662 "sha512" 00:19:38.662 ], 00:19:38.662 "dhchap_dhgroups": [ 00:19:38.662 "null", 00:19:38.662 "ffdhe2048", 00:19:38.662 "ffdhe3072", 00:19:38.662 "ffdhe4096", 00:19:38.662 "ffdhe6144", 00:19:38.662 "ffdhe8192" 00:19:38.662 ] 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "bdev_nvme_attach_controller", 00:19:38.662 "params": { 00:19:38.662 "name": "TLSTEST", 00:19:38.662 "trtype": "TCP", 00:19:38.662 "adrfam": "IPv4", 00:19:38.662 "traddr": "10.0.0.2", 00:19:38.662 "trsvcid": "4420", 00:19:38.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.662 "prchk_reftag": false, 00:19:38.662 "prchk_guard": false, 00:19:38.662 "ctrlr_loss_timeout_sec": 0, 00:19:38.662 "reconnect_delay_sec": 0, 00:19:38.662 "fast_io_fail_timeout_sec": 0, 00:19:38.662 "psk": "key0", 00:19:38.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.662 "hdgst": false, 00:19:38.662 "ddgst": false, 00:19:38.662 "multipath": "multipath" 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "bdev_nvme_set_hotplug", 00:19:38.662 "params": { 00:19:38.662 "period_us": 
100000, 00:19:38.662 "enable": false 00:19:38.662 } 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "method": "bdev_wait_for_examine" 00:19:38.662 } 00:19:38.662 ] 00:19:38.662 }, 00:19:38.662 { 00:19:38.662 "subsystem": "nbd", 00:19:38.662 "config": [] 00:19:38.662 } 00:19:38.662 ] 00:19:38.662 }' 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1246034 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1246034 ']' 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1246034 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.662 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246034 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246034' 00:19:38.663 killing process with pid 1246034 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1246034 00:19:38.663 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.663 00:19:38.663 Latency(us) 00:19:38.663 [2024-10-08T16:35:32.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.663 [2024-10-08T16:35:32.720Z] =================================================================================================================== 00:19:38.663 [2024-10-08T16:35:32.720Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1246034 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1245650 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1245650 ']' 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1245650 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.663 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1245650 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1245650' 00:19:38.924 killing process with pid 1245650 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1245650 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1245650 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:38.924 
18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.924 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:38.924 "subsystems": [ 00:19:38.924 { 00:19:38.924 "subsystem": "keyring", 00:19:38.924 "config": [ 00:19:38.924 { 00:19:38.924 "method": "keyring_file_add_key", 00:19:38.924 "params": { 00:19:38.924 "name": "key0", 00:19:38.924 "path": "/tmp/tmp.ca1e6yuQlX" 00:19:38.924 } 00:19:38.924 } 00:19:38.924 ] 00:19:38.924 }, 00:19:38.924 { 00:19:38.924 "subsystem": "iobuf", 00:19:38.924 "config": [ 00:19:38.924 { 00:19:38.924 "method": "iobuf_set_options", 00:19:38.924 "params": { 00:19:38.924 "small_pool_count": 8192, 00:19:38.924 "large_pool_count": 1024, 00:19:38.924 "small_bufsize": 8192, 00:19:38.924 "large_bufsize": 135168 00:19:38.924 } 00:19:38.924 } 00:19:38.924 ] 00:19:38.924 }, 00:19:38.924 { 00:19:38.924 "subsystem": "sock", 00:19:38.924 "config": [ 00:19:38.924 { 00:19:38.924 "method": "sock_set_default_impl", 00:19:38.924 "params": { 00:19:38.924 "impl_name": "posix" 00:19:38.924 } 00:19:38.924 }, 00:19:38.924 { 00:19:38.924 "method": "sock_impl_set_options", 00:19:38.924 "params": { 00:19:38.924 "impl_name": "ssl", 00:19:38.924 "recv_buf_size": 4096, 00:19:38.924 "send_buf_size": 4096, 00:19:38.924 "enable_recv_pipe": true, 00:19:38.924 "enable_quickack": false, 00:19:38.924 "enable_placement_id": 0, 00:19:38.924 "enable_zerocopy_send_server": true, 00:19:38.924 "enable_zerocopy_send_client": false, 00:19:38.924 "zerocopy_threshold": 0, 00:19:38.924 "tls_version": 0, 00:19:38.924 "enable_ktls": false 00:19:38.924 } 00:19:38.924 }, 00:19:38.924 { 00:19:38.924 "method": "sock_impl_set_options", 00:19:38.924 "params": { 00:19:38.924 "impl_name": "posix", 00:19:38.924 "recv_buf_size": 2097152, 00:19:38.924 "send_buf_size": 2097152, 00:19:38.924 "enable_recv_pipe": true, 00:19:38.924 "enable_quickack": false, 00:19:38.924 "enable_placement_id": 0, 00:19:38.924 "enable_zerocopy_send_server": true, 00:19:38.924 "enable_zerocopy_send_client": false, 00:19:38.924 "zerocopy_threshold": 0, 00:19:38.925 "tls_version": 0, 00:19:38.925 "enable_ktls": false 00:19:38.925 } 00:19:38.925 } 00:19:38.925 ] 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "subsystem": "vmd", 00:19:38.925 "config": [] 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "subsystem": "accel", 00:19:38.925 "config": [ 00:19:38.925 { 00:19:38.925 "method": "accel_set_options", 00:19:38.925 "params": { 00:19:38.925 "small_cache_size": 128, 00:19:38.925 "large_cache_size": 16, 00:19:38.925 "task_count": 2048, 00:19:38.925 "sequence_count": 2048, 00:19:38.925 "buf_count": 2048 00:19:38.925 } 00:19:38.925 } 00:19:38.925 ] 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "subsystem": "bdev", 00:19:38.925 "config": [ 00:19:38.925 { 00:19:38.925 "method": "bdev_set_options", 00:19:38.925 "params": { 00:19:38.925 "bdev_io_pool_size": 65535, 00:19:38.925 "bdev_io_cache_size": 256, 00:19:38.925 "bdev_auto_examine": true, 00:19:38.925 "iobuf_small_cache_size": 128, 00:19:38.925 "iobuf_large_cache_size": 16 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "bdev_raid_set_options", 00:19:38.925 "params": { 00:19:38.925 "process_window_size_kb": 1024, 00:19:38.925 "process_max_bandwidth_mb_sec": 0 00:19:38.925 } 00:19:38.925 }, 
00:19:38.925 { 00:19:38.925 "method": "bdev_iscsi_set_options", 00:19:38.925 "params": { 00:19:38.925 "timeout_sec": 30 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "bdev_nvme_set_options", 00:19:38.925 "params": { 00:19:38.925 "action_on_timeout": "none", 00:19:38.925 "timeout_us": 0, 00:19:38.925 "timeout_admin_us": 0, 00:19:38.925 "keep_alive_timeout_ms": 10000, 00:19:38.925 "arbitration_burst": 0, 00:19:38.925 "low_priority_weight": 0, 00:19:38.925 "medium_priority_weight": 0, 00:19:38.925 "high_priority_weight": 0, 00:19:38.925 "nvme_adminq_poll_period_us": 10000, 00:19:38.925 "nvme_ioq_poll_period_us": 0, 00:19:38.925 "io_queue_requests": 0, 00:19:38.925 "delay_cmd_submit": true, 00:19:38.925 "transport_retry_count": 4, 00:19:38.925 "bdev_retry_count": 3, 00:19:38.925 "transport_ack_timeout": 0, 00:19:38.925 "ctrlr_loss_timeout_sec": 0, 00:19:38.925 "reconnect_delay_sec": 0, 00:19:38.925 "fast_io_fail_timeout_sec": 0, 00:19:38.925 "disable_auto_failback": false, 00:19:38.925 "generate_uuids": false, 00:19:38.925 "transport_tos": 0, 00:19:38.925 "nvme_error_stat": false, 00:19:38.925 "rdma_srq_size": 0, 00:19:38.925 "io_path_stat": false, 00:19:38.925 "allow_accel_sequence": false, 00:19:38.925 "rdma_max_cq_size": 0, 00:19:38.925 "rdma_cm_event_timeout_ms": 0, 00:19:38.925 "dhchap_digests": [ 00:19:38.925 "sha256", 00:19:38.925 "sha384", 00:19:38.925 "sha512" 00:19:38.925 ], 00:19:38.925 "dhchap_dhgroups": [ 00:19:38.925 "null", 00:19:38.925 "ffdhe2048", 00:19:38.925 "ffdhe3072", 00:19:38.925 "ffdhe4096", 00:19:38.925 "ffdhe6144", 00:19:38.925 "ffdhe8192" 00:19:38.925 ] 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "bdev_nvme_set_hotplug", 00:19:38.925 "params": { 00:19:38.925 "period_us": 100000, 00:19:38.925 "enable": false 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "bdev_malloc_create", 00:19:38.925 "params": { 00:19:38.925 "name": "malloc0", 00:19:38.925 "num_blocks": 8192, 00:19:38.925 "block_size": 4096, 00:19:38.925 "physical_block_size": 4096, 00:19:38.925 "uuid": "318361dd-2af9-4ad9-90c5-14e1560efca0", 00:19:38.925 "optimal_io_boundary": 0, 00:19:38.925 "md_size": 0, 00:19:38.925 "dif_type": 0, 00:19:38.925 "dif_is_head_of_md": false, 00:19:38.925 "dif_pi_format": 0 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "bdev_wait_for_examine" 00:19:38.925 } 00:19:38.925 ] 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "subsystem": "nbd", 00:19:38.925 "config": [] 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "subsystem": "scheduler", 00:19:38.925 "config": [ 00:19:38.925 { 00:19:38.925 "method": "framework_set_scheduler", 00:19:38.925 "params": { 00:19:38.925 "name": "static" 00:19:38.925 } 00:19:38.925 } 00:19:38.925 ] 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "subsystem": "nvmf", 00:19:38.925 "config": [ 00:19:38.925 { 00:19:38.925 "method": "nvmf_set_config", 00:19:38.925 "params": { 00:19:38.925 "discovery_filter": "match_any", 00:19:38.925 "admin_cmd_passthru": { 00:19:38.925 "identify_ctrlr": false 00:19:38.925 }, 00:19:38.925 "dhchap_digests": [ 00:19:38.925 "sha256", 00:19:38.925 "sha384", 00:19:38.925 "sha512" 00:19:38.925 ], 00:19:38.925 "dhchap_dhgroups": [ 00:19:38.925 "null", 00:19:38.925 "ffdhe2048", 00:19:38.925 "ffdhe3072", 00:19:38.925 "ffdhe4096", 00:19:38.925 "ffdhe6144", 00:19:38.925 "ffdhe8192" 00:19:38.925 ] 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_set_max_subsystems", 00:19:38.925 "params": { 00:19:38.925 "max_subsystems": 1024 
00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_set_crdt", 00:19:38.925 "params": { 00:19:38.925 "crdt1": 0, 00:19:38.925 "crdt2": 0, 00:19:38.925 "crdt3": 0 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_create_transport", 00:19:38.925 "params": { 00:19:38.925 "trtype": "TCP", 00:19:38.925 "max_queue_depth": 128, 00:19:38.925 "max_io_qpairs_per_ctrlr": 127, 00:19:38.925 "in_capsule_data_size": 4096, 00:19:38.925 "max_io_size": 131072, 00:19:38.925 "io_unit_size": 131072, 00:19:38.925 "max_aq_depth": 128, 00:19:38.925 "num_shared_buffers": 511, 00:19:38.925 "buf_cache_size": 4294967295, 00:19:38.925 "dif_insert_or_strip": false, 00:19:38.925 "zcopy": false, 00:19:38.925 "c2h_success": false, 00:19:38.925 "sock_priority": 0, 00:19:38.925 "abort_timeout_sec": 1, 00:19:38.925 "ack_timeout": 0, 00:19:38.925 "data_wr_pool_size": 0 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_create_subsystem", 00:19:38.925 "params": { 00:19:38.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.925 "allow_any_host": false, 00:19:38.925 "serial_number": "SPDK00000000000001", 00:19:38.925 "model_number": "SPDK bdev Controller", 00:19:38.925 "max_namespaces": 10, 00:19:38.925 "min_cntlid": 1, 00:19:38.925 "max_cntlid": 65519, 00:19:38.925 "ana_reporting": false 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_subsystem_add_host", 00:19:38.925 "params": { 00:19:38.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.925 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.925 "psk": "key0" 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_subsystem_add_ns", 00:19:38.925 "params": { 00:19:38.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.925 "namespace": { 00:19:38.925 "nsid": 1, 00:19:38.925 "bdev_name": "malloc0", 00:19:38.925 "nguid": "318361DD2AF94AD990C514E1560EFCA0", 00:19:38.925 "uuid": "318361dd-2af9-4ad9-90c5-14e1560efca0", 00:19:38.925 "no_auto_visible": false 00:19:38.925 } 00:19:38.925 } 00:19:38.925 }, 00:19:38.925 { 00:19:38.925 "method": "nvmf_subsystem_add_listener", 00:19:38.925 "params": { 00:19:38.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.925 "listen_address": { 00:19:38.925 "trtype": "TCP", 00:19:38.925 "adrfam": "IPv4", 00:19:38.925 "traddr": "10.0.0.2", 00:19:38.925 "trsvcid": "4420" 00:19:38.925 }, 00:19:38.925 "secure_channel": true 00:19:38.925 } 00:19:38.925 } 00:19:38.925 ] 00:19:38.925 } 00:19:38.925 ] 00:19:38.925 }' 00:19:38.925 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1246586 00:19:38.925 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:38.925 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1246586 00:19:38.926 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1246586 ']' 00:19:38.926 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.926 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.926 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:38.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.926 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.926 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.926 [2024-10-08 18:35:32.904185] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:38.926 [2024-10-08 18:35:32.904245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.186 [2024-10-08 18:35:32.988400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.186 [2024-10-08 18:35:33.042203] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.186 [2024-10-08 18:35:33.042234] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.186 [2024-10-08 18:35:33.042240] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.186 [2024-10-08 18:35:33.042248] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.186 [2024-10-08 18:35:33.042252] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.186 [2024-10-08 18:35:33.042737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.446 [2024-10-08 18:35:33.244138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.446 [2024-10-08 18:35:33.276162] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.446 [2024-10-08 18:35:33.276365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1246715 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1246715 /var/tmp/bdevperf.sock 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1246715 ']' 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:39.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.707 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:39.707 "subsystems": [ 00:19:39.707 { 00:19:39.707 "subsystem": "keyring", 00:19:39.707 "config": [ 00:19:39.707 { 00:19:39.707 "method": "keyring_file_add_key", 00:19:39.707 "params": { 00:19:39.707 "name": "key0", 00:19:39.707 "path": "/tmp/tmp.ca1e6yuQlX" 00:19:39.707 } 00:19:39.707 } 00:19:39.707 ] 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "subsystem": "iobuf", 00:19:39.707 "config": [ 00:19:39.707 { 00:19:39.707 "method": "iobuf_set_options", 00:19:39.707 "params": { 00:19:39.707 "small_pool_count": 8192, 00:19:39.707 "large_pool_count": 1024, 00:19:39.707 "small_bufsize": 8192, 00:19:39.707 "large_bufsize": 135168 00:19:39.707 } 00:19:39.707 } 00:19:39.707 ] 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "subsystem": "sock", 00:19:39.707 "config": [ 00:19:39.707 { 00:19:39.707 "method": "sock_set_default_impl", 00:19:39.707 "params": { 00:19:39.707 "impl_name": "posix" 00:19:39.707 } 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "method": "sock_impl_set_options", 00:19:39.707 "params": { 00:19:39.707 "impl_name": "ssl", 00:19:39.707 "recv_buf_size": 4096, 00:19:39.707 "send_buf_size": 4096, 00:19:39.707 "enable_recv_pipe": true, 00:19:39.707 "enable_quickack": false, 00:19:39.707 "enable_placement_id": 0, 00:19:39.707 "enable_zerocopy_send_server": true, 00:19:39.707 "enable_zerocopy_send_client": false, 00:19:39.707 "zerocopy_threshold": 0, 00:19:39.707 "tls_version": 0, 00:19:39.707 "enable_ktls": false 00:19:39.707 } 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "method": "sock_impl_set_options", 00:19:39.707 "params": { 00:19:39.707 "impl_name": "posix", 00:19:39.707 "recv_buf_size": 2097152, 00:19:39.707 "send_buf_size": 2097152, 00:19:39.707 "enable_recv_pipe": true, 00:19:39.707 "enable_quickack": false, 00:19:39.707 "enable_placement_id": 0, 00:19:39.707 "enable_zerocopy_send_server": true, 00:19:39.707 "enable_zerocopy_send_client": false, 00:19:39.707 "zerocopy_threshold": 0, 00:19:39.707 "tls_version": 0, 00:19:39.707 "enable_ktls": false 00:19:39.707 } 00:19:39.707 } 00:19:39.707 ] 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "subsystem": "vmd", 00:19:39.707 "config": [] 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "subsystem": "accel", 00:19:39.707 "config": [ 00:19:39.707 { 00:19:39.707 "method": "accel_set_options", 00:19:39.707 "params": { 00:19:39.707 "small_cache_size": 128, 00:19:39.707 "large_cache_size": 16, 00:19:39.707 "task_count": 2048, 00:19:39.707 "sequence_count": 2048, 00:19:39.707 "buf_count": 2048 00:19:39.707 } 00:19:39.707 } 00:19:39.707 ] 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "subsystem": "bdev", 00:19:39.707 "config": [ 00:19:39.707 { 00:19:39.707 "method": "bdev_set_options", 00:19:39.707 "params": { 00:19:39.707 "bdev_io_pool_size": 65535, 00:19:39.707 "bdev_io_cache_size": 256, 00:19:39.707 "bdev_auto_examine": true, 00:19:39.707 "iobuf_small_cache_size": 128, 00:19:39.707 "iobuf_large_cache_size": 16 
00:19:39.707 } 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "method": "bdev_raid_set_options", 00:19:39.707 "params": { 00:19:39.707 "process_window_size_kb": 1024, 00:19:39.707 "process_max_bandwidth_mb_sec": 0 00:19:39.707 } 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "method": "bdev_iscsi_set_options", 00:19:39.707 "params": { 00:19:39.707 "timeout_sec": 30 00:19:39.707 } 00:19:39.707 }, 00:19:39.707 { 00:19:39.707 "method": "bdev_nvme_set_options", 00:19:39.707 "params": { 00:19:39.707 "action_on_timeout": "none", 00:19:39.707 "timeout_us": 0, 00:19:39.707 "timeout_admin_us": 0, 00:19:39.707 "keep_alive_timeout_ms": 10000, 00:19:39.707 "arbitration_burst": 0, 00:19:39.707 "low_priority_weight": 0, 00:19:39.707 "medium_priority_weight": 0, 00:19:39.707 "high_priority_weight": 0, 00:19:39.707 "nvme_adminq_poll_period_us": 10000, 00:19:39.707 "nvme_ioq_poll_period_us": 0, 00:19:39.707 "io_queue_requests": 512, 00:19:39.707 "delay_cmd_submit": true, 00:19:39.707 "transport_retry_count": 4, 00:19:39.707 "bdev_retry_count": 3, 00:19:39.707 "transport_ack_timeout": 0, 00:19:39.707 "ctrlr_loss_timeout_sec": 0, 00:19:39.707 "reconnect_delay_sec": 0, 00:19:39.707 "fast_io_fail_timeout_sec": 0, 00:19:39.707 "disable_auto_failback": false, 00:19:39.707 "generate_uuids": false, 00:19:39.707 "transport_tos": 0, 00:19:39.707 "nvme_error_stat": false, 00:19:39.707 "rdma_srq_size": 0, 00:19:39.707 "io_path_stat": false, 00:19:39.708 "allow_accel_sequence": false, 00:19:39.708 "rdma_max_cq_size": 0, 00:19:39.708 "rdma_cm_event_timeout_ms": 0, 00:19:39.708 "dhchap_digests": [ 00:19:39.708 "sha256", 00:19:39.708 "sha384", 00:19:39.708 "sha512" 00:19:39.708 ], 00:19:39.708 "dhchap_dhgroups": [ 00:19:39.708 "null", 00:19:39.708 "ffdhe2048", 00:19:39.708 "ffdhe3072", 00:19:39.708 "ffdhe4096", 00:19:39.708 "ffdhe6144", 00:19:39.708 "ffdhe8192" 00:19:39.708 ] 00:19:39.708 } 00:19:39.708 }, 00:19:39.708 { 00:19:39.708 "method": "bdev_nvme_attach_controller", 00:19:39.708 "params": { 00:19:39.708 "name": "TLSTEST", 00:19:39.708 "trtype": "TCP", 00:19:39.708 "adrfam": "IPv4", 00:19:39.708 "traddr": "10.0.0.2", 00:19:39.708 "trsvcid": "4420", 00:19:39.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.708 "prchk_reftag": false, 00:19:39.708 "prchk_guard": false, 00:19:39.708 "ctrlr_loss_timeout_sec": 0, 00:19:39.708 "reconnect_delay_sec": 0, 00:19:39.708 "fast_io_fail_timeout_sec": 0, 00:19:39.708 "psk": "key0", 00:19:39.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.708 "hdgst": false, 00:19:39.708 "ddgst": false, 00:19:39.708 "multipath": "multipath" 00:19:39.708 } 00:19:39.708 }, 00:19:39.708 { 00:19:39.708 "method": "bdev_nvme_set_hotplug", 00:19:39.708 "params": { 00:19:39.708 "period_us": 100000, 00:19:39.708 "enable": false 00:19:39.708 } 00:19:39.708 }, 00:19:39.708 { 00:19:39.708 "method": "bdev_wait_for_examine" 00:19:39.708 } 00:19:39.708 ] 00:19:39.708 }, 00:19:39.708 { 00:19:39.708 "subsystem": "nbd", 00:19:39.708 "config": [] 00:19:39.708 } 00:19:39.708 ] 00:19:39.708 }' 00:19:39.967 [2024-10-08 18:35:33.783191] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
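[annotation] Note how both daemons in this phase receive their whole configuration as JSON on an inherited file descriptor instead of a file on disk: the restarted target reads the save_config dump via -c /dev/fd/62 (tls.sh@205) and the new bdevperf instance reads the config saved from the previous one at tls.sh@199 via -c /dev/fd/63 (tls.sh@206). A hedged sketch of the same pattern; the exact plumbing is not shown in the log, but bash process substitution is what produces /dev/fd/NN paths like these:

    # Sketch: replay a captured configuration over /dev/fd (mechanism assumed).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TGTCONF=$("$SPDK_DIR/scripts/rpc.py" save_config)      # capture from a live target
    # bash expands <(...) to a /dev/fd/NN path, matching -c /dev/fd/62 above;
    # the CI run additionally wraps the target in 'ip netns exec cvl_0_0_ns_spdk'
    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x2 -c <(echo "$TGTCONF") &

    BDEVCONF=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)
    "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$BDEVCONF") &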
00:19:39.968 [2024-10-08 18:35:33.783243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246715 ] 00:19:39.968 [2024-10-08 18:35:33.859546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.968 [2024-10-08 18:35:33.912276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.227 [2024-10-08 18:35:34.046218] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.796 18:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.796 18:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.796 18:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.796 Running I/O for 10 seconds... 00:19:42.674 4875.00 IOPS, 19.04 MiB/s [2024-10-08T16:35:37.671Z] 5236.50 IOPS, 20.46 MiB/s [2024-10-08T16:35:39.052Z] 5520.00 IOPS, 21.56 MiB/s [2024-10-08T16:35:39.991Z] 5589.25 IOPS, 21.83 MiB/s [2024-10-08T16:35:40.930Z] 5676.00 IOPS, 22.17 MiB/s [2024-10-08T16:35:41.870Z] 5589.67 IOPS, 21.83 MiB/s [2024-10-08T16:35:42.808Z] 5707.43 IOPS, 22.29 MiB/s [2024-10-08T16:35:43.749Z] 5740.00 IOPS, 22.42 MiB/s [2024-10-08T16:35:44.689Z] 5719.67 IOPS, 22.34 MiB/s [2024-10-08T16:35:44.950Z] 5780.70 IOPS, 22.58 MiB/s 00:19:50.893 Latency(us) 00:19:50.893 [2024-10-08T16:35:44.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.893 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.893 Verification LBA range: start 0x0 length 0x2000 00:19:50.893 TLSTESTn1 : 10.02 5782.25 22.59 0.00 0.00 22102.94 6417.07 29928.11 00:19:50.893 [2024-10-08T16:35:44.950Z] =================================================================================================================== 00:19:50.893 [2024-10-08T16:35:44.950Z] Total : 5782.25 22.59 0.00 0.00 22102.94 6417.07 29928.11 00:19:50.893 { 00:19:50.893 "results": [ 00:19:50.893 { 00:19:50.893 "job": "TLSTESTn1", 00:19:50.893 "core_mask": "0x4", 00:19:50.893 "workload": "verify", 00:19:50.893 "status": "finished", 00:19:50.893 "verify_range": { 00:19:50.893 "start": 0, 00:19:50.893 "length": 8192 00:19:50.893 }, 00:19:50.893 "queue_depth": 128, 00:19:50.893 "io_size": 4096, 00:19:50.893 "runtime": 10.019281, 00:19:50.893 "iops": 5782.251241381492, 00:19:50.893 "mibps": 22.586918911646453, 00:19:50.893 "io_failed": 0, 00:19:50.893 "io_timeout": 0, 00:19:50.893 "avg_latency_us": 22102.940252471206, 00:19:50.893 "min_latency_us": 6417.066666666667, 00:19:50.893 "max_latency_us": 29928.106666666667 00:19:50.893 } 00:19:50.893 ], 00:19:50.893 "core_count": 1 00:19:50.893 } 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1246715 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1246715 ']' 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1246715 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246715 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246715' 00:19:50.893 killing process with pid 1246715 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1246715 00:19:50.893 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.893 00:19:50.893 Latency(us) 00:19:50.893 [2024-10-08T16:35:44.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.893 [2024-10-08T16:35:44.950Z] =================================================================================================================== 00:19:50.893 [2024-10-08T16:35:44.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1246715 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1246586 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1246586 ']' 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1246586 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.893 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246586 00:19:51.154 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:51.154 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:51.154 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246586' 00:19:51.154 killing process with pid 1246586 00:19:51.154 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1246586 00:19:51.154 18:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1246586 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1248995 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1248995 
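[annotation] Back on the 10-second TLSTESTn1 run above: besides the human-readable latency table, bdevperf prints the same statistics as a JSON "results" object (job, iops, mibps, avg/min/max latency in microseconds). Purely illustrative, assuming that object has been saved to a hypothetical results.json, the headline numbers could be pulled out with jq:

    # Illustrative only: results.json holds the JSON object printed above.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json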
00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1248995 ']' 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.154 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.154 [2024-10-08 18:35:45.163669] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:51.154 [2024-10-08 18:35:45.163723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.552 [2024-10-08 18:35:45.250579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.552 [2024-10-08 18:35:45.333766] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.552 [2024-10-08 18:35:45.333833] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.552 [2024-10-08 18:35:45.333842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.552 [2024-10-08 18:35:45.333849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.552 [2024-10-08 18:35:45.333855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
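[annotation] For the third pass (tls.sh@220 onward) the target is rebuilt from scratch and setup_nvmf_tgt issues the same RPC sequence seen twice already. Consolidated here for reference; every command below appears verbatim in this log, with only the RPC shorthand variable added:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener TLS-enabled; it is what triggers the
    # "TLS support is considered experimental" notices above
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0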
00:19:51.552 [2024-10-08 18:35:45.334686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.189 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.189 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.189 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:52.189 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.189 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.189 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.189 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ca1e6yuQlX 00:19:52.189 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ca1e6yuQlX 00:19:52.189 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.189 [2024-10-08 18:35:46.180127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.189 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.448 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.710 [2024-10-08 18:35:46.545062] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.710 [2024-10-08 18:35:46.545415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.710 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.710 malloc0 00:19:52.971 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.971 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:53.232 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1249432 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1249432 /var/tmp/bdevperf.sock 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1249432 ']' 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.492 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.492 [2024-10-08 18:35:47.372765] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:53.492 [2024-10-08 18:35:47.372839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249432 ] 00:19:53.492 [2024-10-08 18:35:47.453547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.492 [2024-10-08 18:35:47.514304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.433 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:54.433 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:54.433 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:54.433 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:54.433 [2024-10-08 18:35:48.462986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.692 nvme0n1 00:19:54.692 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.692 Running I/O for 1 seconds... 
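[Editor's note] Stripped of the xtrace noise, the TLS round trip traced in this and the preceding block reduces to a handful of rpc.py calls: register the PSK file with the target's keyring, expose the subsystem on a TLS-enabled listener, then have bdevperf register the same key and attach with --psk. Condensed, with every argument exactly as it appears in the trace (rpc.py stands for scripts/rpc.py):

    # Target side, default socket /var/tmp/spdk.sock:
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4420 -k        # -k: listener requires TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Client side, bdevperf's socket:
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
           -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
           -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The two "TLS support is considered experimental" notices above come from the listener and the attach, respectively.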
00:19:55.630 5852.00 IOPS, 22.86 MiB/s 00:19:55.630 Latency(us) 00:19:55.630 [2024-10-08T16:35:49.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.630 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:55.630 Verification LBA range: start 0x0 length 0x2000 00:19:55.630 nvme0n1 : 1.02 5890.05 23.01 0.00 0.00 21564.72 6498.99 25012.91 00:19:55.630 [2024-10-08T16:35:49.687Z] =================================================================================================================== 00:19:55.630 [2024-10-08T16:35:49.687Z] Total : 5890.05 23.01 0.00 0.00 21564.72 6498.99 25012.91 00:19:55.630 { 00:19:55.630 "results": [ 00:19:55.630 { 00:19:55.630 "job": "nvme0n1", 00:19:55.630 "core_mask": "0x2", 00:19:55.630 "workload": "verify", 00:19:55.630 "status": "finished", 00:19:55.630 "verify_range": { 00:19:55.630 "start": 0, 00:19:55.630 "length": 8192 00:19:55.630 }, 00:19:55.630 "queue_depth": 128, 00:19:55.630 "io_size": 4096, 00:19:55.630 "runtime": 1.015272, 00:19:55.630 "iops": 5890.0471991742115, 00:19:55.630 "mibps": 23.007996871774264, 00:19:55.630 "io_failed": 0, 00:19:55.630 "io_timeout": 0, 00:19:55.630 "avg_latency_us": 21564.723085841695, 00:19:55.630 "min_latency_us": 6498.986666666667, 00:19:55.630 "max_latency_us": 25012.906666666666 00:19:55.630 } 00:19:55.630 ], 00:19:55.630 "core_count": 1 00:19:55.630 } 00:19:55.630 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1249432 00:19:55.630 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1249432 ']' 00:19:55.630 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1249432 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249432 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249432' 00:19:55.891 killing process with pid 1249432 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1249432 00:19:55.891 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.891 00:19:55.891 Latency(us) 00:19:55.891 [2024-10-08T16:35:49.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.891 [2024-10-08T16:35:49.948Z] =================================================================================================================== 00:19:55.891 [2024-10-08T16:35:49.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1249432 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1248995 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1248995 ']' 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1248995 00:19:55.891 18:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1248995 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1248995' 00:19:55.891 killing process with pid 1248995 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1248995 00:19:55.891 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1248995 00:19:56.152 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1249918 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1249918 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1249918 ']' 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.153 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.153 [2024-10-08 18:35:50.163845] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:19:56.153 [2024-10-08 18:35:50.163931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.413 [2024-10-08 18:35:50.252440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.413 [2024-10-08 18:35:50.347509] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.413 [2024-10-08 18:35:50.347569] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
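[Editor's note] The app_setup_trace notices repeated above are worth restating: because the target runs with -e 0xFFFF, its tracepoints can be snapshotted at any time while it is up. As plain commands (the spdk_trace location under build/bin is an assumption; the arguments come straight from the notice):

    # Snapshot the running nvmf app's trace buffer (instance id 0):
    build/bin/spdk_trace -s nvmf -i 0
    # ...or keep the raw shm file for offline decoding later:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0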
00:19:56.413 [2024-10-08 18:35:50.347577] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.413 [2024-10-08 18:35:50.347585] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.413 [2024-10-08 18:35:50.347591] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.414 [2024-10-08 18:35:50.348394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.984 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.984 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.984 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:56.984 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.984 18:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.984 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.984 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:56.984 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.984 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.984 [2024-10-08 18:35:51.011422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.244 malloc0 00:19:57.244 [2024-10-08 18:35:51.057844] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.244 [2024-10-08 18:35:51.058196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1250139 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1250139 /var/tmp/bdevperf.sock 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1250139 ']' 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.244 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.244 [2024-10-08 18:35:51.153052] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
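[Editor's note] Note how bdevperf is launched in this section: -z puts it in idle mode on its own RPC socket (-r) instead of starting the workload immediately, which is what makes the key/attach configuration possible before any I/O runs. The launch line from the trace, standalone:

    # Core mask 0x2, queue depth 128, 4 KiB I/O, verify workload, 1 s runtime
    # (the run only begins once bdevperf.py ... perform_tests is issued):
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    trap 'cleanup; exit 1' SIGINT SIGTERM EXIT   # as installed at target/tls.sh@226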
00:19:57.244 [2024-10-08 18:35:51.153125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250139 ] 00:19:57.244 [2024-10-08 18:35:51.234173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.244 [2024-10-08 18:35:51.295062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.184 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.184 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.184 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ca1e6yuQlX 00:19:58.184 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.443 [2024-10-08 18:35:52.256011] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.443 nvme0n1 00:19:58.443 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.443 Running I/O for 1 seconds... 00:19:59.643 5509.00 IOPS, 21.52 MiB/s 00:19:59.643 Latency(us) 00:19:59.643 [2024-10-08T16:35:53.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.643 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.643 Verification LBA range: start 0x0 length 0x2000 00:19:59.643 nvme0n1 : 1.01 5576.59 21.78 0.00 0.00 22818.75 4587.52 32331.09 00:19:59.643 [2024-10-08T16:35:53.700Z] =================================================================================================================== 00:19:59.643 [2024-10-08T16:35:53.700Z] Total : 5576.59 21.78 0.00 0.00 22818.75 4587.52 32331.09 00:19:59.643 { 00:19:59.643 "results": [ 00:19:59.643 { 00:19:59.643 "job": "nvme0n1", 00:19:59.643 "core_mask": "0x2", 00:19:59.643 "workload": "verify", 00:19:59.643 "status": "finished", 00:19:59.643 "verify_range": { 00:19:59.643 "start": 0, 00:19:59.643 "length": 8192 00:19:59.643 }, 00:19:59.643 "queue_depth": 128, 00:19:59.643 "io_size": 4096, 00:19:59.643 "runtime": 1.010832, 00:19:59.643 "iops": 5576.59433021511, 00:19:59.643 "mibps": 21.783571602402773, 00:19:59.643 "io_failed": 0, 00:19:59.643 "io_timeout": 0, 00:19:59.643 "avg_latency_us": 22818.745517119038, 00:19:59.643 "min_latency_us": 4587.52, 00:19:59.643 "max_latency_us": 32331.093333333334 00:19:59.643 } 00:19:59.643 ], 00:19:59.643 "core_count": 1 00:19:59.643 } 00:19:59.643 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:59.643 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.643 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.643 18:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:59.643 "subsystems": [ 00:19:59.643 { 00:19:59.643 "subsystem": "keyring", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "keyring_file_add_key", 00:19:59.643 "params": { 00:19:59.643 "name": "key0", 00:19:59.643 "path": "/tmp/tmp.ca1e6yuQlX" 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "iobuf", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "iobuf_set_options", 00:19:59.643 "params": { 00:19:59.643 "small_pool_count": 8192, 00:19:59.643 "large_pool_count": 1024, 00:19:59.643 "small_bufsize": 8192, 00:19:59.643 "large_bufsize": 135168 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "sock", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "sock_set_default_impl", 00:19:59.643 "params": { 00:19:59.643 "impl_name": "posix" 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "sock_impl_set_options", 00:19:59.643 "params": { 00:19:59.643 "impl_name": "ssl", 00:19:59.643 "recv_buf_size": 4096, 00:19:59.643 "send_buf_size": 4096, 00:19:59.643 "enable_recv_pipe": true, 00:19:59.643 "enable_quickack": false, 00:19:59.643 "enable_placement_id": 0, 00:19:59.643 "enable_zerocopy_send_server": true, 00:19:59.643 "enable_zerocopy_send_client": false, 00:19:59.643 "zerocopy_threshold": 0, 00:19:59.643 "tls_version": 0, 00:19:59.643 "enable_ktls": false 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "sock_impl_set_options", 00:19:59.643 "params": { 00:19:59.643 "impl_name": "posix", 00:19:59.643 "recv_buf_size": 2097152, 00:19:59.643 "send_buf_size": 2097152, 00:19:59.643 "enable_recv_pipe": true, 00:19:59.643 "enable_quickack": false, 00:19:59.643 "enable_placement_id": 0, 00:19:59.643 "enable_zerocopy_send_server": true, 00:19:59.643 "enable_zerocopy_send_client": false, 00:19:59.643 "zerocopy_threshold": 0, 00:19:59.643 "tls_version": 0, 00:19:59.643 "enable_ktls": false 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "vmd", 00:19:59.643 "config": [] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "accel", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "accel_set_options", 00:19:59.643 "params": { 00:19:59.643 "small_cache_size": 128, 00:19:59.643 "large_cache_size": 16, 00:19:59.643 "task_count": 2048, 00:19:59.643 "sequence_count": 2048, 00:19:59.643 "buf_count": 2048 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "bdev", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "bdev_set_options", 00:19:59.643 "params": { 00:19:59.643 "bdev_io_pool_size": 65535, 00:19:59.643 "bdev_io_cache_size": 256, 00:19:59.643 "bdev_auto_examine": true, 00:19:59.643 "iobuf_small_cache_size": 128, 00:19:59.643 "iobuf_large_cache_size": 16 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "bdev_raid_set_options", 00:19:59.643 "params": { 00:19:59.643 "process_window_size_kb": 1024, 00:19:59.643 "process_max_bandwidth_mb_sec": 0 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "bdev_iscsi_set_options", 00:19:59.643 "params": { 00:19:59.643 "timeout_sec": 30 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "bdev_nvme_set_options", 00:19:59.643 "params": { 00:19:59.643 "action_on_timeout": "none", 00:19:59.643 "timeout_us": 0, 00:19:59.643 
"timeout_admin_us": 0, 00:19:59.643 "keep_alive_timeout_ms": 10000, 00:19:59.643 "arbitration_burst": 0, 00:19:59.643 "low_priority_weight": 0, 00:19:59.643 "medium_priority_weight": 0, 00:19:59.643 "high_priority_weight": 0, 00:19:59.643 "nvme_adminq_poll_period_us": 10000, 00:19:59.643 "nvme_ioq_poll_period_us": 0, 00:19:59.643 "io_queue_requests": 0, 00:19:59.643 "delay_cmd_submit": true, 00:19:59.643 "transport_retry_count": 4, 00:19:59.643 "bdev_retry_count": 3, 00:19:59.643 "transport_ack_timeout": 0, 00:19:59.643 "ctrlr_loss_timeout_sec": 0, 00:19:59.643 "reconnect_delay_sec": 0, 00:19:59.643 "fast_io_fail_timeout_sec": 0, 00:19:59.643 "disable_auto_failback": false, 00:19:59.643 "generate_uuids": false, 00:19:59.643 "transport_tos": 0, 00:19:59.643 "nvme_error_stat": false, 00:19:59.643 "rdma_srq_size": 0, 00:19:59.643 "io_path_stat": false, 00:19:59.643 "allow_accel_sequence": false, 00:19:59.643 "rdma_max_cq_size": 0, 00:19:59.643 "rdma_cm_event_timeout_ms": 0, 00:19:59.643 "dhchap_digests": [ 00:19:59.643 "sha256", 00:19:59.643 "sha384", 00:19:59.643 "sha512" 00:19:59.643 ], 00:19:59.643 "dhchap_dhgroups": [ 00:19:59.643 "null", 00:19:59.643 "ffdhe2048", 00:19:59.643 "ffdhe3072", 00:19:59.643 "ffdhe4096", 00:19:59.643 "ffdhe6144", 00:19:59.643 "ffdhe8192" 00:19:59.643 ] 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "bdev_nvme_set_hotplug", 00:19:59.643 "params": { 00:19:59.643 "period_us": 100000, 00:19:59.643 "enable": false 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "bdev_malloc_create", 00:19:59.643 "params": { 00:19:59.643 "name": "malloc0", 00:19:59.643 "num_blocks": 8192, 00:19:59.643 "block_size": 4096, 00:19:59.643 "physical_block_size": 4096, 00:19:59.643 "uuid": "0ef51ece-09d4-4ce4-941b-4434f2fc89f4", 00:19:59.643 "optimal_io_boundary": 0, 00:19:59.643 "md_size": 0, 00:19:59.643 "dif_type": 0, 00:19:59.643 "dif_is_head_of_md": false, 00:19:59.643 "dif_pi_format": 0 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "bdev_wait_for_examine" 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "nbd", 00:19:59.643 "config": [] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "scheduler", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "framework_set_scheduler", 00:19:59.643 "params": { 00:19:59.643 "name": "static" 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "subsystem": "nvmf", 00:19:59.643 "config": [ 00:19:59.643 { 00:19:59.643 "method": "nvmf_set_config", 00:19:59.643 "params": { 00:19:59.643 "discovery_filter": "match_any", 00:19:59.643 "admin_cmd_passthru": { 00:19:59.643 "identify_ctrlr": false 00:19:59.643 }, 00:19:59.643 "dhchap_digests": [ 00:19:59.643 "sha256", 00:19:59.643 "sha384", 00:19:59.643 "sha512" 00:19:59.643 ], 00:19:59.643 "dhchap_dhgroups": [ 00:19:59.643 "null", 00:19:59.643 "ffdhe2048", 00:19:59.643 "ffdhe3072", 00:19:59.643 "ffdhe4096", 00:19:59.643 "ffdhe6144", 00:19:59.643 "ffdhe8192" 00:19:59.643 ] 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_set_max_subsystems", 00:19:59.643 "params": { 00:19:59.643 "max_subsystems": 1024 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_set_crdt", 00:19:59.643 "params": { 00:19:59.643 "crdt1": 0, 00:19:59.643 "crdt2": 0, 00:19:59.643 "crdt3": 0 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_create_transport", 00:19:59.643 "params": { 00:19:59.643 "trtype": 
"TCP", 00:19:59.643 "max_queue_depth": 128, 00:19:59.643 "max_io_qpairs_per_ctrlr": 127, 00:19:59.643 "in_capsule_data_size": 4096, 00:19:59.643 "max_io_size": 131072, 00:19:59.643 "io_unit_size": 131072, 00:19:59.643 "max_aq_depth": 128, 00:19:59.643 "num_shared_buffers": 511, 00:19:59.643 "buf_cache_size": 4294967295, 00:19:59.643 "dif_insert_or_strip": false, 00:19:59.643 "zcopy": false, 00:19:59.643 "c2h_success": false, 00:19:59.643 "sock_priority": 0, 00:19:59.643 "abort_timeout_sec": 1, 00:19:59.643 "ack_timeout": 0, 00:19:59.643 "data_wr_pool_size": 0 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_create_subsystem", 00:19:59.643 "params": { 00:19:59.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.643 "allow_any_host": false, 00:19:59.643 "serial_number": "00000000000000000000", 00:19:59.643 "model_number": "SPDK bdev Controller", 00:19:59.643 "max_namespaces": 32, 00:19:59.643 "min_cntlid": 1, 00:19:59.643 "max_cntlid": 65519, 00:19:59.643 "ana_reporting": false 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_subsystem_add_host", 00:19:59.643 "params": { 00:19:59.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.643 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.643 "psk": "key0" 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_subsystem_add_ns", 00:19:59.643 "params": { 00:19:59.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.643 "namespace": { 00:19:59.643 "nsid": 1, 00:19:59.643 "bdev_name": "malloc0", 00:19:59.643 "nguid": "0EF51ECE09D44CE4941B4434F2FC89F4", 00:19:59.643 "uuid": "0ef51ece-09d4-4ce4-941b-4434f2fc89f4", 00:19:59.643 "no_auto_visible": false 00:19:59.643 } 00:19:59.643 } 00:19:59.643 }, 00:19:59.643 { 00:19:59.643 "method": "nvmf_subsystem_add_listener", 00:19:59.643 "params": { 00:19:59.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.643 "listen_address": { 00:19:59.643 "trtype": "TCP", 00:19:59.643 "adrfam": "IPv4", 00:19:59.643 "traddr": "10.0.0.2", 00:19:59.643 "trsvcid": "4420" 00:19:59.643 }, 00:19:59.643 "secure_channel": false, 00:19:59.643 "sock_impl": "ssl" 00:19:59.643 } 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 } 00:19:59.643 ] 00:19:59.643 }' 00:19:59.643 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:59.904 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:59.904 "subsystems": [ 00:19:59.904 { 00:19:59.904 "subsystem": "keyring", 00:19:59.904 "config": [ 00:19:59.904 { 00:19:59.904 "method": "keyring_file_add_key", 00:19:59.904 "params": { 00:19:59.904 "name": "key0", 00:19:59.904 "path": "/tmp/tmp.ca1e6yuQlX" 00:19:59.904 } 00:19:59.904 } 00:19:59.904 ] 00:19:59.904 }, 00:19:59.904 { 00:19:59.904 "subsystem": "iobuf", 00:19:59.904 "config": [ 00:19:59.904 { 00:19:59.904 "method": "iobuf_set_options", 00:19:59.904 "params": { 00:19:59.904 "small_pool_count": 8192, 00:19:59.904 "large_pool_count": 1024, 00:19:59.904 "small_bufsize": 8192, 00:19:59.904 "large_bufsize": 135168 00:19:59.904 } 00:19:59.904 } 00:19:59.904 ] 00:19:59.904 }, 00:19:59.904 { 00:19:59.904 "subsystem": "sock", 00:19:59.904 "config": [ 00:19:59.904 { 00:19:59.904 "method": "sock_set_default_impl", 00:19:59.904 "params": { 00:19:59.904 "impl_name": "posix" 00:19:59.904 } 00:19:59.904 }, 00:19:59.904 { 00:19:59.905 "method": "sock_impl_set_options", 00:19:59.905 "params": { 00:19:59.905 "impl_name": "ssl", 00:19:59.905 
"recv_buf_size": 4096, 00:19:59.905 "send_buf_size": 4096, 00:19:59.905 "enable_recv_pipe": true, 00:19:59.905 "enable_quickack": false, 00:19:59.905 "enable_placement_id": 0, 00:19:59.905 "enable_zerocopy_send_server": true, 00:19:59.905 "enable_zerocopy_send_client": false, 00:19:59.905 "zerocopy_threshold": 0, 00:19:59.905 "tls_version": 0, 00:19:59.905 "enable_ktls": false 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "sock_impl_set_options", 00:19:59.905 "params": { 00:19:59.905 "impl_name": "posix", 00:19:59.905 "recv_buf_size": 2097152, 00:19:59.905 "send_buf_size": 2097152, 00:19:59.905 "enable_recv_pipe": true, 00:19:59.905 "enable_quickack": false, 00:19:59.905 "enable_placement_id": 0, 00:19:59.905 "enable_zerocopy_send_server": true, 00:19:59.905 "enable_zerocopy_send_client": false, 00:19:59.905 "zerocopy_threshold": 0, 00:19:59.905 "tls_version": 0, 00:19:59.905 "enable_ktls": false 00:19:59.905 } 00:19:59.905 } 00:19:59.905 ] 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "subsystem": "vmd", 00:19:59.905 "config": [] 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "subsystem": "accel", 00:19:59.905 "config": [ 00:19:59.905 { 00:19:59.905 "method": "accel_set_options", 00:19:59.905 "params": { 00:19:59.905 "small_cache_size": 128, 00:19:59.905 "large_cache_size": 16, 00:19:59.905 "task_count": 2048, 00:19:59.905 "sequence_count": 2048, 00:19:59.905 "buf_count": 2048 00:19:59.905 } 00:19:59.905 } 00:19:59.905 ] 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "subsystem": "bdev", 00:19:59.905 "config": [ 00:19:59.905 { 00:19:59.905 "method": "bdev_set_options", 00:19:59.905 "params": { 00:19:59.905 "bdev_io_pool_size": 65535, 00:19:59.905 "bdev_io_cache_size": 256, 00:19:59.905 "bdev_auto_examine": true, 00:19:59.905 "iobuf_small_cache_size": 128, 00:19:59.905 "iobuf_large_cache_size": 16 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_raid_set_options", 00:19:59.905 "params": { 00:19:59.905 "process_window_size_kb": 1024, 00:19:59.905 "process_max_bandwidth_mb_sec": 0 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_iscsi_set_options", 00:19:59.905 "params": { 00:19:59.905 "timeout_sec": 30 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_nvme_set_options", 00:19:59.905 "params": { 00:19:59.905 "action_on_timeout": "none", 00:19:59.905 "timeout_us": 0, 00:19:59.905 "timeout_admin_us": 0, 00:19:59.905 "keep_alive_timeout_ms": 10000, 00:19:59.905 "arbitration_burst": 0, 00:19:59.905 "low_priority_weight": 0, 00:19:59.905 "medium_priority_weight": 0, 00:19:59.905 "high_priority_weight": 0, 00:19:59.905 "nvme_adminq_poll_period_us": 10000, 00:19:59.905 "nvme_ioq_poll_period_us": 0, 00:19:59.905 "io_queue_requests": 512, 00:19:59.905 "delay_cmd_submit": true, 00:19:59.905 "transport_retry_count": 4, 00:19:59.905 "bdev_retry_count": 3, 00:19:59.905 "transport_ack_timeout": 0, 00:19:59.905 "ctrlr_loss_timeout_sec": 0, 00:19:59.905 "reconnect_delay_sec": 0, 00:19:59.905 "fast_io_fail_timeout_sec": 0, 00:19:59.905 "disable_auto_failback": false, 00:19:59.905 "generate_uuids": false, 00:19:59.905 "transport_tos": 0, 00:19:59.905 "nvme_error_stat": false, 00:19:59.905 "rdma_srq_size": 0, 00:19:59.905 "io_path_stat": false, 00:19:59.905 "allow_accel_sequence": false, 00:19:59.905 "rdma_max_cq_size": 0, 00:19:59.905 "rdma_cm_event_timeout_ms": 0, 00:19:59.905 "dhchap_digests": [ 00:19:59.905 "sha256", 00:19:59.905 "sha384", 00:19:59.905 "sha512" 00:19:59.905 ], 00:19:59.905 "dhchap_dhgroups": [ 
00:19:59.905 "null", 00:19:59.905 "ffdhe2048", 00:19:59.905 "ffdhe3072", 00:19:59.905 "ffdhe4096", 00:19:59.905 "ffdhe6144", 00:19:59.905 "ffdhe8192" 00:19:59.905 ] 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_nvme_attach_controller", 00:19:59.905 "params": { 00:19:59.905 "name": "nvme0", 00:19:59.905 "trtype": "TCP", 00:19:59.905 "adrfam": "IPv4", 00:19:59.905 "traddr": "10.0.0.2", 00:19:59.905 "trsvcid": "4420", 00:19:59.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.905 "prchk_reftag": false, 00:19:59.905 "prchk_guard": false, 00:19:59.905 "ctrlr_loss_timeout_sec": 0, 00:19:59.905 "reconnect_delay_sec": 0, 00:19:59.905 "fast_io_fail_timeout_sec": 0, 00:19:59.905 "psk": "key0", 00:19:59.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.905 "hdgst": false, 00:19:59.905 "ddgst": false, 00:19:59.905 "multipath": "multipath" 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_nvme_set_hotplug", 00:19:59.905 "params": { 00:19:59.905 "period_us": 100000, 00:19:59.905 "enable": false 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_enable_histogram", 00:19:59.905 "params": { 00:19:59.905 "name": "nvme0n1", 00:19:59.905 "enable": true 00:19:59.905 } 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "method": "bdev_wait_for_examine" 00:19:59.905 } 00:19:59.905 ] 00:19:59.905 }, 00:19:59.905 { 00:19:59.905 "subsystem": "nbd", 00:19:59.905 "config": [] 00:19:59.905 } 00:19:59.905 ] 00:19:59.905 }' 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1250139 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1250139 ']' 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1250139 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1250139 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1250139' 00:19:59.905 killing process with pid 1250139 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1250139 00:19:59.905 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.905 00:19:59.905 Latency(us) 00:19:59.905 [2024-10-08T16:35:53.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.905 [2024-10-08T16:35:53.962Z] =================================================================================================================== 00:19:59.905 [2024-10-08T16:35:53.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.905 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1250139 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1249918 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1249918 ']' 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1249918 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249918 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249918' 00:20:00.166 killing process with pid 1249918 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1249918 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1249918 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.166 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:00.166 "subsystems": [ 00:20:00.166 { 00:20:00.166 "subsystem": "keyring", 00:20:00.166 "config": [ 00:20:00.166 { 00:20:00.166 "method": "keyring_file_add_key", 00:20:00.166 "params": { 00:20:00.166 "name": "key0", 00:20:00.166 "path": "/tmp/tmp.ca1e6yuQlX" 00:20:00.166 } 00:20:00.166 } 00:20:00.166 ] 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 "subsystem": "iobuf", 00:20:00.166 "config": [ 00:20:00.166 { 00:20:00.166 "method": "iobuf_set_options", 00:20:00.166 "params": { 00:20:00.166 "small_pool_count": 8192, 00:20:00.166 "large_pool_count": 1024, 00:20:00.166 "small_bufsize": 8192, 00:20:00.166 "large_bufsize": 135168 00:20:00.166 } 00:20:00.166 } 00:20:00.166 ] 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 "subsystem": "sock", 00:20:00.166 "config": [ 00:20:00.166 { 00:20:00.166 "method": "sock_set_default_impl", 00:20:00.166 "params": { 00:20:00.166 "impl_name": "posix" 00:20:00.166 } 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 "method": "sock_impl_set_options", 00:20:00.166 "params": { 00:20:00.166 "impl_name": "ssl", 00:20:00.166 "recv_buf_size": 4096, 00:20:00.166 "send_buf_size": 4096, 00:20:00.166 "enable_recv_pipe": true, 00:20:00.166 "enable_quickack": false, 00:20:00.166 "enable_placement_id": 0, 00:20:00.166 "enable_zerocopy_send_server": true, 00:20:00.166 "enable_zerocopy_send_client": false, 00:20:00.166 "zerocopy_threshold": 0, 00:20:00.166 "tls_version": 0, 00:20:00.166 "enable_ktls": false 00:20:00.166 } 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 "method": "sock_impl_set_options", 00:20:00.166 "params": { 00:20:00.166 "impl_name": "posix", 00:20:00.166 "recv_buf_size": 2097152, 00:20:00.166 "send_buf_size": 2097152, 00:20:00.166 "enable_recv_pipe": true, 00:20:00.166 "enable_quickack": false, 00:20:00.166 "enable_placement_id": 0, 00:20:00.166 "enable_zerocopy_send_server": true, 00:20:00.166 "enable_zerocopy_send_client": false, 00:20:00.166 "zerocopy_threshold": 0, 00:20:00.166 "tls_version": 0, 00:20:00.166 "enable_ktls": false 00:20:00.166 } 00:20:00.166 } 00:20:00.166 ] 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 
"subsystem": "vmd", 00:20:00.166 "config": [] 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 "subsystem": "accel", 00:20:00.166 "config": [ 00:20:00.166 { 00:20:00.166 "method": "accel_set_options", 00:20:00.166 "params": { 00:20:00.166 "small_cache_size": 128, 00:20:00.166 "large_cache_size": 16, 00:20:00.166 "task_count": 2048, 00:20:00.166 "sequence_count": 2048, 00:20:00.166 "buf_count": 2048 00:20:00.166 } 00:20:00.166 } 00:20:00.166 ] 00:20:00.166 }, 00:20:00.166 { 00:20:00.166 "subsystem": "bdev", 00:20:00.166 "config": [ 00:20:00.166 { 00:20:00.166 "method": "bdev_set_options", 00:20:00.166 "params": { 00:20:00.166 "bdev_io_pool_size": 65535, 00:20:00.166 "bdev_io_cache_size": 256, 00:20:00.167 "bdev_auto_examine": true, 00:20:00.167 "iobuf_small_cache_size": 128, 00:20:00.167 "iobuf_large_cache_size": 16 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_raid_set_options", 00:20:00.167 "params": { 00:20:00.167 "process_window_size_kb": 1024, 00:20:00.167 "process_max_bandwidth_mb_sec": 0 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_iscsi_set_options", 00:20:00.167 "params": { 00:20:00.167 "timeout_sec": 30 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_nvme_set_options", 00:20:00.167 "params": { 00:20:00.167 "action_on_timeout": "none", 00:20:00.167 "timeout_us": 0, 00:20:00.167 "timeout_admin_us": 0, 00:20:00.167 "keep_alive_timeout_ms": 10000, 00:20:00.167 "arbitration_burst": 0, 00:20:00.167 "low_priority_weight": 0, 00:20:00.167 "medium_priority_weight": 0, 00:20:00.167 "high_priority_weight": 0, 00:20:00.167 "nvme_adminq_poll_period_us": 10000, 00:20:00.167 "nvme_ioq_poll_period_us": 0, 00:20:00.167 "io_queue_requests": 0, 00:20:00.167 "delay_cmd_submit": true, 00:20:00.167 "transport_retry_count": 4, 00:20:00.167 "bdev_retry_count": 3, 00:20:00.167 "transport_ack_timeout": 0, 00:20:00.167 "ctrlr_loss_timeout_sec": 0, 00:20:00.167 "reconnect_delay_sec": 0, 00:20:00.167 "fast_io_fail_timeout_sec": 0, 00:20:00.167 "disable_auto_failback": false, 00:20:00.167 "generate_uuids": false, 00:20:00.167 "transport_tos": 0, 00:20:00.167 "nvme_error_stat": false, 00:20:00.167 "rdma_srq_size": 0, 00:20:00.167 "io_path_stat": false, 00:20:00.167 "allow_accel_sequence": false, 00:20:00.167 "rdma_max_cq_size": 0, 00:20:00.167 "rdma_cm_event_timeout_ms": 0, 00:20:00.167 "dhchap_digests": [ 00:20:00.167 "sha256", 00:20:00.167 "sha384", 00:20:00.167 "sha512" 00:20:00.167 ], 00:20:00.167 "dhchap_dhgroups": [ 00:20:00.167 "null", 00:20:00.167 "ffdhe2048", 00:20:00.167 "ffdhe3072", 00:20:00.167 "ffdhe4096", 00:20:00.167 "ffdhe6144", 00:20:00.167 "ffdhe8192" 00:20:00.167 ] 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_nvme_set_hotplug", 00:20:00.167 "params": { 00:20:00.167 "period_us": 100000, 00:20:00.167 "enable": false 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_malloc_create", 00:20:00.167 "params": { 00:20:00.167 "name": "malloc0", 00:20:00.167 "num_blocks": 8192, 00:20:00.167 "block_size": 4096, 00:20:00.167 "physical_block_size": 4096, 00:20:00.167 "uuid": "0ef51ece-09d4-4ce4-941b-4434f2fc89f4", 00:20:00.167 "optimal_io_boundary": 0, 00:20:00.167 "md_size": 0, 00:20:00.167 "dif_type": 0, 00:20:00.167 "dif_is_head_of_md": false, 00:20:00.167 "dif_pi_format": 0 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_wait_for_examine" 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "nbd", 
00:20:00.167 "config": [] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "scheduler", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "framework_set_scheduler", 00:20:00.167 "params": { 00:20:00.167 "name": "static" 00:20:00.167 } 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "nvmf", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "nvmf_set_config", 00:20:00.167 "params": { 00:20:00.167 "discovery_filter": "match_any", 00:20:00.167 "admin_cmd_passthru": { 00:20:00.167 "identify_ctrlr": false 00:20:00.167 }, 00:20:00.167 "dhchap_digests": [ 00:20:00.167 "sha256", 00:20:00.167 "sha384", 00:20:00.167 "sha512" 00:20:00.167 ], 00:20:00.167 "dhchap_dhgroups": [ 00:20:00.167 "null", 00:20:00.167 "ffdhe2048", 00:20:00.167 "ffdhe3072", 00:20:00.167 "ffdhe4096", 00:20:00.167 "ffdhe6144", 00:20:00.167 "ffdhe8192" 00:20:00.167 ] 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_set_max_subsystems", 00:20:00.167 "params": { 00:20:00.167 "max_subsystems": 1024 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_set_crdt", 00:20:00.167 "params": { 00:20:00.167 "crdt1": 0, 00:20:00.167 "crdt2": 0, 00:20:00.167 "crdt3": 0 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_create_transport", 00:20:00.167 "params": { 00:20:00.167 "trtype": "TCP", 00:20:00.167 "max_queue_depth": 128, 00:20:00.167 "max_io_qpairs_per_ctrlr": 127, 00:20:00.167 "in_capsule_data_size": 4096, 00:20:00.167 "max_io_size": 131072, 00:20:00.167 "io_unit_size": 131072, 00:20:00.167 "max_aq_depth": 128, 00:20:00.167 "num_shared_buffers": 511, 00:20:00.167 "buf_cache_size": 4294967295, 00:20:00.167 "dif_insert_or_strip": false, 00:20:00.167 "zcopy": false, 00:20:00.167 "c2h_success": false, 00:20:00.167 "sock_priority": 0, 00:20:00.167 "abort_timeout_sec": 1, 00:20:00.167 "ack_timeout": 0, 00:20:00.167 "data_wr_pool_size": 0 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_create_subsystem", 00:20:00.167 "params": { 00:20:00.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.167 "allow_any_host": false, 00:20:00.167 "serial_number": "00000000000000000000", 00:20:00.167 "model_number": "SPDK bdev Controller", 00:20:00.167 "max_namespaces": 32, 00:20:00.167 "min_cntlid": 1, 00:20:00.167 "max_cntlid": 65519, 00:20:00.167 "ana_reporting": false 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_subsystem_add_host", 00:20:00.167 "params": { 00:20:00.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.167 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.167 "psk": "key0" 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_subsystem_add_ns", 00:20:00.167 "params": { 00:20:00.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.167 "namespace": { 00:20:00.167 "nsid": 1, 00:20:00.167 "bdev_name": "malloc0", 00:20:00.167 "nguid": "0EF51ECE09D44CE4941B4434F2FC89F4", 00:20:00.167 "uuid": "0ef51ece-09d4-4ce4-941b-4434f2fc89f4", 00:20:00.167 "no_auto_visible": false 00:20:00.167 } 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "nvmf_subsystem_add_listener", 00:20:00.167 "params": { 00:20:00.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.167 "listen_address": { 00:20:00.167 "trtype": "TCP", 00:20:00.167 "adrfam": "IPv4", 00:20:00.167 "traddr": "10.0.0.2", 00:20:00.167 "trsvcid": "4420" 00:20:00.167 }, 00:20:00.167 "secure_channel": false, 00:20:00.167 "sock_impl": "ssl" 00:20:00.167 } 00:20:00.167 } 00:20:00.167 ] 
00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }' 00:20:00.167 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1250825 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1250825 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1250825 ']' 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.427 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.427 [2024-10-08 18:35:54.282056] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:20:00.427 [2024-10-08 18:35:54.282113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.427 [2024-10-08 18:35:54.363593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.427 [2024-10-08 18:35:54.417629] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.427 [2024-10-08 18:35:54.417663] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.427 [2024-10-08 18:35:54.417669] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.427 [2024-10-08 18:35:54.417673] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.427 [2024-10-08 18:35:54.417677] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
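[Editor's note] This restart is the point of the save_config round trip: the tgtcfg JSON dumped above is replayed into a fresh nvmf_tgt through a process-substitution file descriptor, so the keyring entry, subsystem, namespace, and TLS listener all come back without a single further rpc.py call. Schematically (helper names from the trace; <(...) is what bash exposes as /dev/fd/62):

    tgtcfg=$(rpc_cmd save_config)        # captured earlier in the trace
    nvmfappstart -c <(echo "$tgtcfg")    # runs: nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62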
00:20:00.427 [2024-10-08 18:35:54.418190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.686 [2024-10-08 18:35:54.618592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.686 [2024-10-08 18:35:54.650623] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.686 [2024-10-08 18:35:54.650824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1250878 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1250878 /var/tmp/bdevperf.sock 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1250878 ']' 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
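[Editor's note] The client side gets the same treatment: bdevperf is restarted from the saved bperfcfg via /dev/fd/63, which recreates key0 and re-attaches nvme0 purely from JSON, exercising the config-load path rather than the RPC path. A sketch of the launch traced here:

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &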
00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.256 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:01.256 "subsystems": [ 00:20:01.256 { 00:20:01.256 "subsystem": "keyring", 00:20:01.256 "config": [ 00:20:01.256 { 00:20:01.256 "method": "keyring_file_add_key", 00:20:01.256 "params": { 00:20:01.256 "name": "key0", 00:20:01.256 "path": "/tmp/tmp.ca1e6yuQlX" 00:20:01.256 } 00:20:01.256 } 00:20:01.256 ] 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "subsystem": "iobuf", 00:20:01.256 "config": [ 00:20:01.256 { 00:20:01.256 "method": "iobuf_set_options", 00:20:01.256 "params": { 00:20:01.256 "small_pool_count": 8192, 00:20:01.256 "large_pool_count": 1024, 00:20:01.256 "small_bufsize": 8192, 00:20:01.256 "large_bufsize": 135168 00:20:01.256 } 00:20:01.256 } 00:20:01.256 ] 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "subsystem": "sock", 00:20:01.256 "config": [ 00:20:01.256 { 00:20:01.256 "method": "sock_set_default_impl", 00:20:01.256 "params": { 00:20:01.256 "impl_name": "posix" 00:20:01.256 } 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "method": "sock_impl_set_options", 00:20:01.256 "params": { 00:20:01.256 "impl_name": "ssl", 00:20:01.256 "recv_buf_size": 4096, 00:20:01.256 "send_buf_size": 4096, 00:20:01.256 "enable_recv_pipe": true, 00:20:01.256 "enable_quickack": false, 00:20:01.256 "enable_placement_id": 0, 00:20:01.256 "enable_zerocopy_send_server": true, 00:20:01.256 "enable_zerocopy_send_client": false, 00:20:01.256 "zerocopy_threshold": 0, 00:20:01.256 "tls_version": 0, 00:20:01.256 "enable_ktls": false 00:20:01.256 } 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "method": "sock_impl_set_options", 00:20:01.256 "params": { 00:20:01.256 "impl_name": "posix", 00:20:01.256 "recv_buf_size": 2097152, 00:20:01.256 "send_buf_size": 2097152, 00:20:01.256 "enable_recv_pipe": true, 00:20:01.256 "enable_quickack": false, 00:20:01.256 "enable_placement_id": 0, 00:20:01.256 "enable_zerocopy_send_server": true, 00:20:01.256 "enable_zerocopy_send_client": false, 00:20:01.256 "zerocopy_threshold": 0, 00:20:01.256 "tls_version": 0, 00:20:01.256 "enable_ktls": false 00:20:01.256 } 00:20:01.256 } 00:20:01.256 ] 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "subsystem": "vmd", 00:20:01.256 "config": [] 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "subsystem": "accel", 00:20:01.256 "config": [ 00:20:01.256 { 00:20:01.256 "method": "accel_set_options", 00:20:01.256 "params": { 00:20:01.256 "small_cache_size": 128, 00:20:01.256 "large_cache_size": 16, 00:20:01.256 "task_count": 2048, 00:20:01.256 "sequence_count": 2048, 00:20:01.256 "buf_count": 2048 00:20:01.256 } 00:20:01.256 } 00:20:01.256 ] 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "subsystem": "bdev", 00:20:01.256 "config": [ 00:20:01.256 { 00:20:01.256 "method": "bdev_set_options", 00:20:01.256 "params": { 00:20:01.256 "bdev_io_pool_size": 65535, 00:20:01.256 "bdev_io_cache_size": 256, 00:20:01.256 "bdev_auto_examine": true, 00:20:01.256 "iobuf_small_cache_size": 128, 00:20:01.256 "iobuf_large_cache_size": 16 00:20:01.256 } 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "method": "bdev_raid_set_options", 00:20:01.256 "params": { 00:20:01.256 "process_window_size_kb": 1024, 00:20:01.256 "process_max_bandwidth_mb_sec": 0 00:20:01.256 } 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "method": "bdev_iscsi_set_options", 00:20:01.256 "params": { 00:20:01.256 
"timeout_sec": 30 00:20:01.256 } 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "method": "bdev_nvme_set_options", 00:20:01.256 "params": { 00:20:01.256 "action_on_timeout": "none", 00:20:01.256 "timeout_us": 0, 00:20:01.256 "timeout_admin_us": 0, 00:20:01.256 "keep_alive_timeout_ms": 10000, 00:20:01.256 "arbitration_burst": 0, 00:20:01.256 "low_priority_weight": 0, 00:20:01.256 "medium_priority_weight": 0, 00:20:01.256 "high_priority_weight": 0, 00:20:01.256 "nvme_adminq_poll_period_us": 10000, 00:20:01.256 "nvme_ioq_poll_period_us": 0, 00:20:01.256 "io_queue_requests": 512, 00:20:01.256 "delay_cmd_submit": true, 00:20:01.256 "transport_retry_count": 4, 00:20:01.256 "bdev_retry_count": 3, 00:20:01.256 "transport_ack_timeout": 0, 00:20:01.256 "ctrlr_loss_timeout_sec": 0, 00:20:01.256 "reconnect_delay_sec": 0, 00:20:01.256 "fast_io_fail_timeout_sec": 0, 00:20:01.256 "disable_auto_failback": false, 00:20:01.256 "generate_uuids": false, 00:20:01.256 "transport_tos": 0, 00:20:01.256 "nvme_error_stat": false, 00:20:01.256 "rdma_srq_size": 0, 00:20:01.256 "io_path_stat": false, 00:20:01.256 "allow_accel_sequence": false, 00:20:01.256 "rdma_max_cq_size": 0, 00:20:01.256 "rdma_cm_event_timeout_ms": 0, 00:20:01.256 "dhchap_digests": [ 00:20:01.256 "sha256", 00:20:01.256 "sha384", 00:20:01.256 "sha512" 00:20:01.256 ], 00:20:01.256 "dhchap_dhgroups": [ 00:20:01.256 "null", 00:20:01.256 "ffdhe2048", 00:20:01.256 "ffdhe3072", 00:20:01.256 "ffdhe4096", 00:20:01.256 "ffdhe6144", 00:20:01.256 "ffdhe8192" 00:20:01.256 ] 00:20:01.256 } 00:20:01.256 }, 00:20:01.256 { 00:20:01.256 "method": "bdev_nvme_attach_controller", 00:20:01.256 "params": { 00:20:01.256 "name": "nvme0", 00:20:01.256 "trtype": "TCP", 00:20:01.256 "adrfam": "IPv4", 00:20:01.256 "traddr": "10.0.0.2", 00:20:01.256 "trsvcid": "4420", 00:20:01.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.256 "prchk_reftag": false, 00:20:01.256 "prchk_guard": false, 00:20:01.256 "ctrlr_loss_timeout_sec": 0, 00:20:01.256 "reconnect_delay_sec": 0, 00:20:01.256 "fast_io_fail_timeout_sec": 0, 00:20:01.256 "psk": "key0", 00:20:01.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.256 "hdgst": false, 00:20:01.256 "ddgst": false, 00:20:01.256 "multipath": "multipath" 00:20:01.257 } 00:20:01.257 }, 00:20:01.257 { 00:20:01.257 "method": "bdev_nvme_set_hotplug", 00:20:01.257 "params": { 00:20:01.257 "period_us": 100000, 00:20:01.257 "enable": false 00:20:01.257 } 00:20:01.257 }, 00:20:01.257 { 00:20:01.257 "method": "bdev_enable_histogram", 00:20:01.257 "params": { 00:20:01.257 "name": "nvme0n1", 00:20:01.257 "enable": true 00:20:01.257 } 00:20:01.257 }, 00:20:01.257 { 00:20:01.257 "method": "bdev_wait_for_examine" 00:20:01.257 } 00:20:01.257 ] 00:20:01.257 }, 00:20:01.257 { 00:20:01.257 "subsystem": "nbd", 00:20:01.257 "config": [] 00:20:01.257 } 00:20:01.257 ] 00:20:01.257 }' 00:20:01.257 [2024-10-08 18:35:55.153498] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:20:01.257 [2024-10-08 18:35:55.153553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250878 ] 00:20:01.257 [2024-10-08 18:35:55.231848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.257 [2024-10-08 18:35:55.286026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.517 [2024-10-08 18:35:55.421165] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.088 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.088 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:02.088 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.088 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:02.088 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.088 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.348 Running I/O for 1 seconds... 00:20:03.288 5483.00 IOPS, 21.42 MiB/s 00:20:03.288 Latency(us) 00:20:03.288 [2024-10-08T16:35:57.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.288 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.288 Verification LBA range: start 0x0 length 0x2000 00:20:03.288 nvme0n1 : 1.03 5443.72 21.26 0.00 0.00 23220.94 4587.52 32986.45 00:20:03.288 [2024-10-08T16:35:57.345Z] =================================================================================================================== 00:20:03.288 [2024-10-08T16:35:57.345Z] Total : 5443.72 21.26 0.00 0.00 23220.94 4587.52 32986.45 00:20:03.288 { 00:20:03.288 "results": [ 00:20:03.288 { 00:20:03.288 "job": "nvme0n1", 00:20:03.288 "core_mask": "0x2", 00:20:03.288 "workload": "verify", 00:20:03.288 "status": "finished", 00:20:03.288 "verify_range": { 00:20:03.288 "start": 0, 00:20:03.288 "length": 8192 00:20:03.288 }, 00:20:03.288 "queue_depth": 128, 00:20:03.288 "io_size": 4096, 00:20:03.288 "runtime": 1.030913, 00:20:03.288 "iops": 5443.7183351068425, 00:20:03.288 "mibps": 21.264524746511103, 00:20:03.288 "io_failed": 0, 00:20:03.288 "io_timeout": 0, 00:20:03.288 "avg_latency_us": 23220.94072701354, 00:20:03.288 "min_latency_us": 4587.52, 00:20:03.288 "max_latency_us": 32986.45333333333 00:20:03.288 } 00:20:03.288 ], 00:20:03.288 "core_count": 1 00:20:03.288 } 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid 
']' 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:03.288 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.288 nvmf_trace.0 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1250878 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1250878 ']' 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1250878 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1250878 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1250878' 00:20:03.549 killing process with pid 1250878 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1250878 00:20:03.549 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.549 00:20:03.549 Latency(us) 00:20:03.549 [2024-10-08T16:35:57.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.549 [2024-10-08T16:35:57.606Z] =================================================================================================================== 00:20:03.549 [2024-10-08T16:35:57.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1250878 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.549 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.549 rmmod nvme_tcp 00:20:03.549 rmmod nvme_fabrics 00:20:03.549 rmmod nvme_keyring 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.810 18:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1250825 ']' 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1250825 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1250825 ']' 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1250825 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1250825 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1250825' 00:20:03.810 killing process with pid 1250825 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1250825 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1250825 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.810 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0TxjBBIGYW /tmp/tmp.04WOmOcDGf /tmp/tmp.ca1e6yuQlX 00:20:06.354 00:20:06.354 real 1m28.995s 00:20:06.354 user 2m20.959s 00:20:06.354 sys 0m26.993s 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.354 ************************************ 00:20:06.354 END TEST nvmf_tls 
00:20:06.354 ************************************ 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.354 ************************************ 00:20:06.354 START TEST nvmf_fips 00:20:06.354 ************************************ 00:20:06.354 18:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.354 * Looking for test storage... 00:20:06.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.354 --rc genhtml_branch_coverage=1 00:20:06.354 --rc genhtml_function_coverage=1 00:20:06.354 --rc genhtml_legend=1 00:20:06.354 --rc geninfo_all_blocks=1 00:20:06.354 --rc geninfo_unexecuted_blocks=1 00:20:06.354 00:20:06.354 ' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.354 --rc genhtml_branch_coverage=1 00:20:06.354 --rc genhtml_function_coverage=1 00:20:06.354 --rc genhtml_legend=1 00:20:06.354 --rc geninfo_all_blocks=1 00:20:06.354 --rc geninfo_unexecuted_blocks=1 00:20:06.354 00:20:06.354 ' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.354 --rc genhtml_branch_coverage=1 00:20:06.354 --rc genhtml_function_coverage=1 00:20:06.354 --rc genhtml_legend=1 00:20:06.354 --rc geninfo_all_blocks=1 00:20:06.354 --rc geninfo_unexecuted_blocks=1 00:20:06.354 00:20:06.354 ' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:06.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.354 --rc genhtml_branch_coverage=1 00:20:06.354 --rc genhtml_function_coverage=1 00:20:06.354 --rc genhtml_legend=1 00:20:06.354 --rc geninfo_all_blocks=1 00:20:06.354 --rc geninfo_unexecuted_blocks=1 00:20:06.354 00:20:06.354 ' 00:20:06.354 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:06.355 18:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:06.355 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:06.356 Error setting digest 00:20:06.356 4012CB4E047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:06.356 4012CB4E047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:06.356 
18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.356 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.495 18:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:14.495 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:14.495 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:14.495 18:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:14.495 Found net devices under 0000:31:00.0: cvl_0_0 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:14.495 Found net devices under 0000:31:00.1: cvl_0_1 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:14.495 18:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:14.495 18:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:14.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:20:14.495 00:20:14.495 --- 10.0.0.2 ping statistics --- 00:20:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.495 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:20:14.495 00:20:14.495 --- 10.0.0.1 ping statistics --- 00:20:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.495 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.495 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1256009 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1256009 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1256009 ']' 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.496 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.496 [2024-10-08 18:36:08.268621] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
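The `prepare_net_devs` phase above builds the two-port e810 topology the rest of the fips test runs on: the first port (`cvl_0_0`) is moved into a network namespace to host the target at 10.0.0.2, the second (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the 4420 listener, and a ping in each direction sanity-checks the link. Condensed from the commands in the log (a sketch, not the suite's nvmf/common.sh):

```bash
# Condensed from the setup steps logged above; run as root.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
```

Using a namespace rather than two machines lets both ends of the physical NIC pair talk over real wire while sharing one host, which is why `nvmf_tgt` is later launched under `ip netns exec cvl_0_0_ns_spdk`.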
00:20:14.496 [2024-10-08 18:36:08.268696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.496 [2024-10-08 18:36:08.357777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.496 [2024-10-08 18:36:08.449756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.496 [2024-10-08 18:36:08.449813] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.496 [2024-10-08 18:36:08.449822] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.496 [2024-10-08 18:36:08.449829] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.496 [2024-10-08 18:36:08.449835] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.496 [2024-10-08 18:36:08.450611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:15.067 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.srv 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.srv 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.srv 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.srv 00:20:15.328 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.328 [2024-10-08 18:36:09.303702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.328 [2024-10-08 18:36:09.319702] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.328 [2024-10-08 18:36:09.320073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.328 malloc0 00:20:15.589 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.590 18:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1256304 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1256304 /var/tmp/bdevperf.sock 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1256304 ']' 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.590 18:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:15.590 [2024-10-08 18:36:09.478050] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:20:15.590 [2024-10-08 18:36:09.478129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256304 ] 00:20:15.590 [2024-10-08 18:36:09.564512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.851 [2024-10-08 18:36:09.656502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.422 18:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.422 18:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:16.422 18:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.srv 00:20:16.683 18:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.683 [2024-10-08 18:36:10.663457] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.944 TLSTESTn1 00:20:16.944 18:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.944 Running I/O for 10 seconds... 
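The 10-second run just launched was set up by plumbing the TLS PSK end to end on the initiator side: the interchange-format key is written to a mode-0600 temp file, registered with bdevperf's keyring as `key0`, the controller is attached with `--psk key0`, and the workload is kicked off through bdevperf.py. A sketch of that sequence using the values from the log (the target-side registration done by `setup_nvmf_tgt_conf` is not repeated here):

```bash
# Sketch of the initiator-side PSK plumbing shown above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"                                  # PSK files must be private

$RPC keyring_file_add_key key0 "$key_path"
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key0

# Drive the configured job (queue depth 128, 4 KiB verify, 10 s run).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```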
00:20:18.825 3387.00 IOPS, 13.23 MiB/s [2024-10-08T16:36:14.265Z] 3707.00 IOPS, 14.48 MiB/s [2024-10-08T16:36:15.205Z] 4461.00 IOPS, 17.43 MiB/s [2024-10-08T16:36:16.144Z] 4620.25 IOPS, 18.05 MiB/s [2024-10-08T16:36:17.084Z] 4745.00 IOPS, 18.54 MiB/s [2024-10-08T16:36:18.023Z] 4836.67 IOPS, 18.89 MiB/s [2024-10-08T16:36:18.963Z] 5057.14 IOPS, 19.75 MiB/s [2024-10-08T16:36:19.904Z] 5127.75 IOPS, 20.03 MiB/s [2024-10-08T16:36:21.287Z] 5212.78 IOPS, 20.36 MiB/s [2024-10-08T16:36:21.287Z] 5308.40 IOPS, 20.74 MiB/s
00:20:27.230 Latency(us)
00:20:27.230 [2024-10-08T16:36:21.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.230 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:27.230 Verification LBA range: start 0x0 length 0x2000
00:20:27.230 TLSTESTn1 : 10.01 5314.83 20.76 0.00 0.00 24047.74 4969.81 33423.36
00:20:27.230 [2024-10-08T16:36:21.287Z] ===================================================================================================================
00:20:27.230 [2024-10-08T16:36:21.287Z] Total : 5314.83 20.76 0.00 0.00 24047.74 4969.81 33423.36
00:20:27.230 {
00:20:27.230 "results": [
00:20:27.230 {
00:20:27.230 "job": "TLSTESTn1",
00:20:27.230 "core_mask": "0x4",
00:20:27.230 "workload": "verify",
00:20:27.230 "status": "finished",
00:20:27.230 "verify_range": {
00:20:27.230 "start": 0,
00:20:27.230 "length": 8192
00:20:27.230 },
00:20:27.230 "queue_depth": 128,
00:20:27.230 "io_size": 4096,
00:20:27.230 "runtime": 10.011981,
00:20:27.230 "iops": 5314.8322994220625,
00:20:27.230 "mibps": 20.76106366961743,
00:20:27.230 "io_failed": 0,
00:20:27.230 "io_timeout": 0,
00:20:27.230 "avg_latency_us": 24047.743881330025,
00:20:27.230 "min_latency_us": 4969.8133333333335,
00:20:27.230 "max_latency_us": 33423.36
00:20:27.230 }
00:20:27.230 ],
00:20:27.230 "core_count": 1
00:20:27.230 }
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:20:27.230 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:27.230 nvmf_trace.0
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1256304
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1256304 ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1256304
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1256304
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1256304'
00:20:27.230 killing process with pid 1256304
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1256304
00:20:27.230 Received shutdown signal, test time was about 10.000000 seconds
00:20:27.230
00:20:27.230 Latency(us)
00:20:27.230 [2024-10-08T16:36:21.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.230 [2024-10-08T16:36:21.287Z] ===================================================================================================================
00:20:27.230 [2024-10-08T16:36:21.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1256304
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:27.230 rmmod nvme_tcp
00:20:27.230 rmmod nvme_fabrics
00:20:27.230 rmmod nvme_keyring
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1256009 ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1256009
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1256009 ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1256009
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:27.230 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1256009
00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:27.491 18:36:21
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1256009' 00:20:27.491 killing process with pid 1256009 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1256009 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1256009 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.491 18:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.srv 00:20:30.038 00:20:30.038 real 0m23.570s 00:20:30.038 user 0m25.018s 00:20:30.038 sys 0m9.934s 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.038 ************************************ 00:20:30.038 END TEST nvmf_fips 00:20:30.038 ************************************ 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.038 ************************************ 00:20:30.038 START TEST nvmf_control_msg_list 00:20:30.038 ************************************ 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:30.038 * Looking for test storage... 
00:20:30.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.038 --rc genhtml_branch_coverage=1 00:20:30.038 --rc genhtml_function_coverage=1 00:20:30.038 --rc genhtml_legend=1 00:20:30.038 --rc geninfo_all_blocks=1 00:20:30.038 --rc geninfo_unexecuted_blocks=1 00:20:30.038 00:20:30.038 ' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.038 --rc genhtml_branch_coverage=1 00:20:30.038 --rc genhtml_function_coverage=1 00:20:30.038 --rc genhtml_legend=1 00:20:30.038 --rc geninfo_all_blocks=1 00:20:30.038 --rc geninfo_unexecuted_blocks=1 00:20:30.038 00:20:30.038 ' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.038 --rc genhtml_branch_coverage=1 00:20:30.038 --rc genhtml_function_coverage=1 00:20:30.038 --rc genhtml_legend=1 00:20:30.038 --rc geninfo_all_blocks=1 00:20:30.038 --rc geninfo_unexecuted_blocks=1 00:20:30.038 00:20:30.038 ' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.038 --rc genhtml_branch_coverage=1 00:20:30.038 --rc genhtml_function_coverage=1 00:20:30.038 --rc genhtml_legend=1 00:20:30.038 --rc geninfo_all_blocks=1 00:20:30.038 --rc geninfo_unexecuted_blocks=1 00:20:30.038 00:20:30.038 ' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.038 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.039 18:36:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:38.184 18:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.184 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:38.185 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.185 18:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:38.185 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:38.185 Found net devices under 0000:31:00.0: cvl_0_0 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:38.185 Found net devices under 0000:31:00.1: cvl_0_1 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.185 18:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:20:38.185 00:20:38.185 --- 10.0.0.2 ping statistics --- 00:20:38.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.185 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:20:38.185 00:20:38.185 --- 10.0.0.1 ping statistics --- 00:20:38.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.185 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1263270 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1263270 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1263270 ']' 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.185 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.185 [2024-10-08 18:36:31.674302] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:20:38.186 [2024-10-08 18:36:31.674366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.186 [2024-10-08 18:36:31.766710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.186 [2024-10-08 18:36:31.860278] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.186 [2024-10-08 18:36:31.860337] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.186 [2024-10-08 18:36:31.860346] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.186 [2024-10-08 18:36:31.860353] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.186 [2024-10-08 18:36:31.860360] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
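[Editor's note] The target that just started is configured next by control_msg_list.sh: a TCP transport restricted to a single control message and 768-byte in-capsule data, one subsystem backed by a 32 MiB malloc bdev, and a listener on 10.0.0.2:4420, after which three single-queue-depth readers are raced against it. A sketch of the equivalent manual sequence, with every command copied from the rpc_cmd and spdk_nvme_perf traces that follow (running them by hand against the default /var/tmp/spdk.sock RPC socket is my assumption; in this job the target itself lives inside the cvl_0_0_ns_spdk network namespace):

    # Transport with exactly one control message buffer -- the condition under test.
    scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a   # -a: allow any host
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512                  # 32 MiB, 512-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Three concurrent queue-depth-1 4 KiB randread clients (cores 0x2, 0x4, 0x8)
    # then contend for the lone control message for one second each:
    build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &

In the result tables further down, one client sustains about 1519 IOPS at ~0.66 ms average latency while the other two are held to 25 IOPS at ~41 ms, consistent with them queuing behind the single control message.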
00:20:38.186 [2024-10-08 18:36:31.861221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.447 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.447 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:38.447 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:38.447 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.447 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 [2024-10-08 18:36:32.554464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 Malloc0 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.708 18:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 [2024-10-08 18:36:32.625904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1263371 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1263372 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1263374 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1263371 00:20:38.708 18:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.708 [2024-10-08 18:36:32.716783] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.708 [2024-10-08 18:36:32.717046] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.708 [2024-10-08 18:36:32.717416] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:40.095 Initializing NVMe Controllers 00:20:40.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:40.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:40.095 Initialization complete. Launching workers. 
00:20:40.095 ========================================================
00:20:40.095 Latency(us)
00:20:40.095 Device Information : IOPS MiB/s Average min max
00:20:40.095 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41321.12 40815.21 41984.73
00:20:40.095 ========================================================
00:20:40.095 Total : 25.00 0.10 41321.12 40815.21 41984.73
00:20:40.095
00:20:40.095 Initializing NVMe Controllers
00:20:40.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:40.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:20:40.095 Initialization complete. Launching workers.
00:20:40.095 ========================================================
00:20:40.095 Latency(us)
00:20:40.095 Device Information : IOPS MiB/s Average min max
00:20:40.095 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1519.00 5.93 658.29 272.85 885.68
00:20:40.095 ========================================================
00:20:40.095 Total : 1519.00 5.93 658.29 272.85 885.68
00:20:40.095
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1263372
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1263374
00:20:40.096 Initializing NVMe Controllers
00:20:40.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:40.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:20:40.096 Initialization complete. Launching workers.
00:20:40.096 ========================================================
00:20:40.096 Latency(us)
00:20:40.096 Device Information : IOPS MiB/s Average min max
00:20:40.096 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41312.52 40675.61 42268.50
00:20:40.096 ========================================================
00:20:40.096 Total : 25.00 0.10 41312.52 40675.61 42268.50
00:20:40.096
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:40.096 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:40.096 rmmod nvme_tcp
00:20:40.096 rmmod nvme_fabrics
00:20:40.096 rmmod nvme_keyring
00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515
-- # '[' -n 1263270 ']' 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1263270 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1263270 ']' 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1263270 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1263270 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1263270' 00:20:40.096 killing process with pid 1263270 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1263270 00:20:40.096 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1263270 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.357 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.906 00:20:42.906 real 0m12.735s 00:20:42.906 user 0m8.055s 00:20:42.906 sys 0m6.817s 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.906 ************************************ 00:20:42.906 END TEST nvmf_control_msg_list 00:20:42.906 
************************************ 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.906 ************************************ 00:20:42.906 START TEST nvmf_wait_for_buf 00:20:42.906 ************************************ 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:42.906 * Looking for test storage... 00:20:42.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.906 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.907 --rc genhtml_branch_coverage=1 00:20:42.907 --rc genhtml_function_coverage=1 00:20:42.907 --rc genhtml_legend=1 00:20:42.907 --rc geninfo_all_blocks=1 00:20:42.907 --rc geninfo_unexecuted_blocks=1 00:20:42.907 00:20:42.907 ' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.907 --rc genhtml_branch_coverage=1 00:20:42.907 --rc genhtml_function_coverage=1 00:20:42.907 --rc genhtml_legend=1 00:20:42.907 --rc geninfo_all_blocks=1 00:20:42.907 --rc geninfo_unexecuted_blocks=1 00:20:42.907 00:20:42.907 ' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.907 --rc genhtml_branch_coverage=1 00:20:42.907 --rc genhtml_function_coverage=1 00:20:42.907 --rc genhtml_legend=1 00:20:42.907 --rc geninfo_all_blocks=1 00:20:42.907 --rc geninfo_unexecuted_blocks=1 00:20:42.907 00:20:42.907 ' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.907 --rc genhtml_branch_coverage=1 00:20:42.907 --rc genhtml_function_coverage=1 00:20:42.907 --rc genhtml_legend=1 00:20:42.907 --rc geninfo_all_blocks=1 00:20:42.907 --rc geninfo_unexecuted_blocks=1 00:20:42.907 00:20:42.907 ' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.907 18:36:36 
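The cmp_versions walk traced above is scripts/common.sh deciding whether the installed lcov predates 2.x, which renamed the coverage options; 1.15 < 2 holds, so the legacy --rc lcov_* spellings get exported. A condensed standalone equivalent, assuming purely numeric dotted version components (lt_version is our name for the sketch, not an SPDK helper):

    lt_version() {
        local IFS=.-
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt_version "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
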
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.907 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.052 
18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.052 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.052 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.052 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.052 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.052 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.053 18:36:44 
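That is gather_supported_nvmf_pci_devs in a nutshell: match whitelisted vendor:device pairs (both ports here are Intel E810, vendor 0x8086 device 0x159b, bound to the ice driver) and resolve each PCI function to its kernel interface through sysfs, which yields cvl_0_0 and cvl_0_1. An lspci-based approximation of the same walk; note the harness actually reads a prebuilt pci_bus_cache rather than shelling out to lspci:

    for bdf in $(lspci -Dn | awk '$3 ~ /^8086:(1592|159b)$/ {print $1}'); do
        for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
            [[ -e $netdir ]] || continue          # skip functions with no bound netdev
            echo "Found net devices under $bdf: ${netdir##*/}"
        done
    done
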
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:20:51.053 00:20:51.053 --- 10.0.0.2 ping statistics --- 00:20:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.053 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:20:51.053 00:20:51.053 --- 10.0.0.1 ping statistics --- 00:20:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.053 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1268025 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1268025 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1268025 ']' 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.053 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.053 [2024-10-08 18:36:44.503097] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
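The interface plumbing above, condensed: one E810 port is moved into a private network namespace as the target side and the other stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the wire between 10.0.0.1 and 10.0.0.2. The iptables rule is tagged with an SPDK_NVMF comment precisely so the iptables-save | grep -v SPDK_NVMF | iptables-restore teardown seen at test exit can strip it again:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator
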
00:20:51.053 [2024-10-08 18:36:44.503163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.053 [2024-10-08 18:36:44.578334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.053 [2024-10-08 18:36:44.672836] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.053 [2024-10-08 18:36:44.672900] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.053 [2024-10-08 18:36:44.672908] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.053 [2024-10-08 18:36:44.672915] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.053 [2024-10-08 18:36:44.672921] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.053 [2024-10-08 18:36:44.673709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.314 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.314 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:51.314 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:51.314 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:51.314 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 
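This RPC prologue is the whole point of the wait_for_buf test: because nvmf_tgt was launched with --wait-for-rpc, the small iobuf pool can be pinned to just 154 buffers before subsystem initialization, and the TCP transport created next (-n 24 -b 24) will run the perf workload into pool exhaustion on purpose. The sequence, assuming scripts/rpc.py is driven against the default RPC socket inside the target's namespace:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    scripts/rpc.py framework_start_init     # init resumes with the starved pool live
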
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 Malloc0 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 [2024-10-08 18:36:45.497856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.575 [2024-10-08 18:36:45.534218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.575 18:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:51.575 [2024-10-08 18:36:45.617088] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:53.489 Initializing NVMe Controllers 00:20:53.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:53.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:53.489 Initialization complete. Launching workers. 00:20:53.489 ======================================================== 00:20:53.489 Latency(us) 00:20:53.489 Device Information : IOPS MiB/s Average min max 00:20:53.489 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32240.93 8010.63 63855.43 00:20:53.489 ======================================================== 00:20:53.489 Total : 129.00 16.12 32240.93 8010.63 63855.43 00:20:53.489 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.489 rmmod nvme_tcp 00:20:53.489 rmmod nvme_fabrics 00:20:53.489 rmmod nvme_keyring 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1268025 ']' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1268025 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1268025 ']' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1268025 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1268025 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1268025' 00:20:53.489 killing process with pid 1268025 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1268025 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1268025 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.489 18:36:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.033 00:20:56.033 real 0m13.050s 00:20:56.033 user 0m5.215s 00:20:56.033 sys 0m6.401s 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.033 ************************************ 00:20:56.033 END TEST nvmf_wait_for_buf 00:20:56.033 ************************************ 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:56.033 18:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.033 18:36:49 
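The pass/fail verdict above reduces to one jq query: after the queue-depth-4, 128 KiB randread run, the nvmf_TCP module must report a nonzero small_pool.retry count (2038 here), proving the transport really did stall and retry for buffers rather than erroring out. A standalone form of the check, assuming the default RPC socket:

    retry=$(scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ ${retry:-0} -eq 0 ]]; then
        echo "FAIL: expected small-pool retries under deliberate buffer starvation"
        exit 1
    fi
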
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:04.168 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:04.168 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:04.168 Found net devices under 0000:31:00.0: cvl_0_0 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:04.168 Found net devices under 0000:31:00.1: cvl_0_1 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:04.168 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.169 ************************************ 00:21:04.169 START TEST nvmf_perf_adq 00:21:04.169 ************************************ 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:04.169 * Looking for test storage... 00:21:04.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:21:04.169 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.169 18:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.169 --rc genhtml_branch_coverage=1 00:21:04.169 --rc genhtml_function_coverage=1 00:21:04.169 --rc genhtml_legend=1 00:21:04.169 --rc geninfo_all_blocks=1 00:21:04.169 --rc geninfo_unexecuted_blocks=1 00:21:04.169 00:21:04.169 ' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.169 --rc genhtml_branch_coverage=1 00:21:04.169 --rc genhtml_function_coverage=1 00:21:04.169 --rc genhtml_legend=1 00:21:04.169 --rc geninfo_all_blocks=1 00:21:04.169 --rc geninfo_unexecuted_blocks=1 00:21:04.169 00:21:04.169 ' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.169 --rc genhtml_branch_coverage=1 00:21:04.169 --rc genhtml_function_coverage=1 00:21:04.169 --rc genhtml_legend=1 00:21:04.169 --rc geninfo_all_blocks=1 00:21:04.169 --rc geninfo_unexecuted_blocks=1 00:21:04.169 00:21:04.169 ' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.169 --rc genhtml_branch_coverage=1 00:21:04.169 --rc genhtml_function_coverage=1 00:21:04.169 --rc genhtml_legend=1 00:21:04.169 --rc geninfo_all_blocks=1 00:21:04.169 --rc geninfo_unexecuted_blocks=1 00:21:04.169 00:21:04.169 ' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
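Every starred START/END pair with its real/user/sys trailer, nvmf_wait_for_buf above and nvmf_perf_adq here, is emitted by autotest_common.sh's run_test wrapper. In outline it is a timed banner sandwich (simplified sketch; the real helper also validates argument counts, which is what the '[' 3 -le 1 ']' probes are, and manages xtrace):

    run_test() {
        local name=$1 banner='************************************'; shift
        printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
        time "$@"                 # produces the real/user/sys lines in the log
        printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
    }
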
00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.169 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:04.170 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.170 18:36:57 
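Two shell quirks surface in the trace above. First, paths/export.sh prepends the same toolchain directories on every re-source, which is why the echoed PATH repeats /opt/golangci, /opt/protoc and /opt/go many times; harmless, since lookup stops at the first hit. Second, build_nvmf_app_args evaluates '[' '' -eq 1 ']' and bash prints "integer expression expected" because some environment variable is empty in this run, leaving the numeric test with no left operand. A defaulted expansion sidesteps that warning; a minimal sketch, with an illustrative variable name rather than the script's real one:

    # Sketch: numeric tests on possibly-empty variables trip [ ... -eq ... ];
    # defaulting the expansion keeps the test well-formed.
    some_flag=""                                # illustrative stand-in
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag disabled (default)"
    fi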
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.752 18:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:10.752 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:10.752 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:10.752 Found net devices under 0000:31:00.0: cvl_0_0 00:21:10.752 18:37:04 
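The loop traced here turns each matching PCI function into a kernel net device by globbing /sys/bus/pci/devices/$pci/net/, which is how 0000:31:00.0 resolves to cvl_0_0 (and, just below, 0000:31:00.1 to cvl_0_1). A standalone sketch of that sysfs walk, with the E810 vendor/device pair from the log hard-coded for illustration:

    # Sketch: map E810 PCI functions (vendor 0x8086, device 0x159b) to net
    # devices the way the nvmf/common.sh trace above does, via the net/ glob.
    shopt -s nullglob
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for netdir in "$pci"/net/*; do
            echo "Found net device under ${pci##*/}: ${netdir##*/}"
        done
    done

nullglob matters here: a bound-but-linkless function has no net/ entries, and without it the glob would pass through literally.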
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:10.752 Found net devices under 0000:31:00.1: cvl_0_1 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:10.752 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:12.138 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:14.683 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
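With both ports enumerated, adq_reload_driver bounces the ice driver so the E810 queues come back in a known state before ADQ is configured, then sleeps while the links re-train. The sequence, condensed from the target/perf_adq.sh trace above:

    # Sketch of adq_reload_driver (target/perf_adq.sh@58-63 above).
    modprobe -a sch_mqprio   # qdisc module needed for the mqprio setup later
    rmmod ice
    modprobe ice
    sleep 5                  # let the ports re-register and links come up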
gather_supported_nvmf_pci_devs 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.978 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:19.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:19.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:19.979 Found net devices under 0000:31:00.0: cvl_0_0 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:19.979 Found net devices under 0000:31:00.1: cvl_0_1 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:21:19.979 00:21:19.979 --- 10.0.0.2 ping statistics --- 00:21:19.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.979 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:21:19.979 00:21:19.979 --- 10.0.0.1 ping statistics --- 00:21:19.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.979 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:19.979 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1278404 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1278404 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1278404 ']' 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.980 18:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.980 [2024-10-08 18:37:13.705504] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
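Everything from nvmf_tcp_init above reduces to a small namespace topology: the target port cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, so the NVMe/TCP traffic really traverses the two physical E810 ports. Condensed from the trace, including the namespaced target launch that follows (workspace paths abbreviated):

    # Sketch of the nvmf_tcp_init wiring and target launch traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

The two pings are the gate for everything after: only once both directions answer does the harness start the target, paused by --wait-for-rpc until the RPC configuration below arrives.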
00:21:19.980 [2024-10-08 18:37:13.705570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.980 [2024-10-08 18:37:13.795642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.980 [2024-10-08 18:37:13.892008] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.980 [2024-10-08 18:37:13.892069] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.980 [2024-10-08 18:37:13.892082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.980 [2024-10-08 18:37:13.892089] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.980 [2024-10-08 18:37:13.892095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.980 [2024-10-08 18:37:13.894168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.980 [2024-10-08 18:37:13.894327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.980 [2024-10-08 18:37:13.894489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.980 [2024-10-08 18:37:13.894490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.552 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 
18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.819 [2024-10-08 18:37:14.743426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.819 Malloc1 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.819 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.820 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.820 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.820 [2024-10-08 18:37:14.809128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.820 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.820 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1278754 00:21:20.820 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:20.820 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
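adq_configure_nvmf_target, traced above, is a plain RPC sequence: pin placement-id handling on the posix sock implementation, finish framework init, create the TCP transport with --sock-priority 0, and publish one malloc namespace on 10.0.0.2:4420. The rpc_cmd wrapper in the log is roughly equivalent to driving scripts/rpc.py against the target's socket directly; a sketch of the same sequence:

    # Sketch: the RPC calls behind adq_configure_nvmf_target 0, as they would
    # look via scripts/rpc.py (the log issues them through rpc_cmd).
    rpc=scripts/rpc.py
    $rpc sock_impl_set_options --enable-placement-id 0 \
         --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420

Setting the sock options before framework_start_init is the ordering that matters here, which is why the target was started with --wait-for-rpc in the first place.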
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:22.994 "tick_rate": 2400000000, 00:21:22.994 "poll_groups": [ 00:21:22.994 { 00:21:22.994 "name": "nvmf_tgt_poll_group_000", 00:21:22.994 "admin_qpairs": 1, 00:21:22.994 "io_qpairs": 1, 00:21:22.994 "current_admin_qpairs": 1, 00:21:22.994 "current_io_qpairs": 1, 00:21:22.994 "pending_bdev_io": 0, 00:21:22.994 "completed_nvme_io": 17041, 00:21:22.994 "transports": [ 00:21:22.994 { 00:21:22.994 "trtype": "TCP" 00:21:22.994 } 00:21:22.994 ] 00:21:22.994 }, 00:21:22.994 { 00:21:22.994 "name": "nvmf_tgt_poll_group_001", 00:21:22.994 "admin_qpairs": 0, 00:21:22.994 "io_qpairs": 1, 00:21:22.994 "current_admin_qpairs": 0, 00:21:22.994 "current_io_qpairs": 1, 00:21:22.994 "pending_bdev_io": 0, 00:21:22.994 "completed_nvme_io": 17704, 00:21:22.994 "transports": [ 00:21:22.994 { 00:21:22.994 "trtype": "TCP" 00:21:22.994 } 00:21:22.994 ] 00:21:22.994 }, 00:21:22.994 { 00:21:22.994 "name": "nvmf_tgt_poll_group_002", 00:21:22.994 "admin_qpairs": 0, 00:21:22.994 "io_qpairs": 1, 00:21:22.994 "current_admin_qpairs": 0, 00:21:22.994 "current_io_qpairs": 1, 00:21:22.994 "pending_bdev_io": 0, 00:21:22.994 "completed_nvme_io": 17187, 00:21:22.994 "transports": [ 00:21:22.994 { 00:21:22.994 "trtype": "TCP" 00:21:22.994 } 00:21:22.994 ] 00:21:22.994 }, 00:21:22.994 { 00:21:22.994 "name": "nvmf_tgt_poll_group_003", 00:21:22.994 "admin_qpairs": 0, 00:21:22.994 "io_qpairs": 1, 00:21:22.994 "current_admin_qpairs": 0, 00:21:22.994 "current_io_qpairs": 1, 00:21:22.994 "pending_bdev_io": 0, 00:21:22.994 "completed_nvme_io": 17052, 00:21:22.994 "transports": [ 00:21:22.994 { 00:21:22.994 "trtype": "TCP" 00:21:22.994 } 00:21:22.994 ] 00:21:22.994 } 00:21:22.994 ] 00:21:22.994 }' 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:22.994 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1278754 00:21:31.132 Initializing NVMe Controllers 00:21:31.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:31.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:31.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:31.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:31.132 Initialization complete. Launching workers. 00:21:31.132 ======================================================== 00:21:31.132 Latency(us) 00:21:31.132 Device Information : IOPS MiB/s Average min max 00:21:31.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12112.87 47.32 5283.82 1214.98 10912.16 00:21:31.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13376.25 52.25 4784.48 994.11 13217.82 00:21:31.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13550.45 52.93 4722.32 1254.60 12012.00 00:21:31.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12679.06 49.53 5046.90 1086.54 13549.04 00:21:31.132 ======================================================== 00:21:31.132 Total : 51718.64 202.03 4949.48 994.11 13549.04 00:21:31.132 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.132 rmmod nvme_tcp 00:21:31.132 rmmod nvme_fabrics 00:21:31.132 rmmod nvme_keyring 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1278404 ']' 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1278404 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1278404 ']' 00:21:31.132 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1278404 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1278404 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1278404' 00:21:31.132 killing process with pid 1278404 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1278404 00:21:31.132 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1278404 00:21:31.392 18:37:25 
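The actual ADQ assertion sits between perf start and the results above: with port-4420 flows being steered, the four connections opened by the 0xF0-masked spdk_nvme_perf process must land one io_qpair per target poll group, so the test counts poll groups whose current_io_qpairs is exactly 1 and fails unless it sees all 4. Standalone, against the nvmf_get_stats JSON shown earlier:

    # Sketch of the balance check from target/perf_adq.sh@85-87 above;
    # jq emits one line per matching poll group, wc -l counts them.
    count=$(rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    if [[ $count -ne 4 ]]; then
        echo "ADQ steering failed: only $count of 4 poll groups hold one IO qpair"
        exit 1
    fi

In this run all four groups report current_io_qpairs of 1 and roughly 17k completed IOs each, so the check passes and the perf run is allowed to finish.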
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.392 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.304 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.304 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:33.304 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:33.304 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:35.217 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:37.760 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
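Teardown is deliberately surgical: the iptr helper traced above re-serializes the firewall and drops only rules carrying the SPDK_NVMF comment tag that ipts attached at insert time, leaving any pre-existing rules intact. The whole mechanism is one pipeline:

    # Sketch: how iptr (nvmf/common.sh@297 above) removes only SPDK-tagged rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging rules with -m comment when they are added is what makes this grep-based removal safe.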
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:43.050 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.050 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:43.051 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:43.051 Found net devices under 0000:31:00.0: cvl_0_0 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:43.051 Found net devices under 0000:31:00.1: cvl_0_1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.051 18:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:21:43.051 00:21:43.051 --- 10.0.0.2 ping statistics --- 00:21:43.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.051 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:21:43.051 00:21:43.051 --- 10.0.0.1 ping statistics --- 00:21:43.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.051 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:43.051 net.core.busy_poll = 1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:43.051 net.core.busy_read = 1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1283250 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1283250 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1283250 ']' 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.051 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.051 [2024-10-08 18:37:36.981795] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:21:43.051 [2024-10-08 18:37:36.981862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.051 [2024-10-08 18:37:37.074793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.313 [2024-10-08 18:37:37.171629] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
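For readability, here is the adq_configure_driver sequence just traced (perf_adq.sh@22-38), condensed into plain shell; the interface names, IP, and queue layout are taken verbatim from this run:

NS='ip netns exec cvl_0_0_ns_spdk'
$NS ethtool --offload cvl_0_0 hw-tc-offload on                # let the ice NIC classify in hardware
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                # busy-poll sockets instead of waiting on interrupts
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 -> 2 queues at offset 0, TC1 -> 2 queues at offset 2,
# offloaded in channel mode so each TC gets its own hardware queue set.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1, hardware-only (skip_sw).
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Finally, scripts/perf/nvmf/set_xps_rxqs pins XPS so each CPU transmits on its own queue.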
00:21:43.313 [2024-10-08 18:37:37.171692] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.313 [2024-10-08 18:37:37.171702] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.313 [2024-10-08 18:37:37.171709] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.313 [2024-10-08 18:37:37.171715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.313 [2024-10-08 18:37:37.173960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.313 [2024-10-08 18:37:37.174131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.313 [2024-10-08 18:37:37.174441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.313 [2024-10-08 18:37:37.174445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.885 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.147 18:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.147 [2024-10-08 18:37:38.012937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.147 Malloc1 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.147 [2024-10-08 18:37:38.078729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1283594 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:44.147 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:46.060 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:46.060 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.060 18:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.060 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.060 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:46.060 "tick_rate": 2400000000, 00:21:46.060 "poll_groups": [ 00:21:46.060 { 00:21:46.060 "name": "nvmf_tgt_poll_group_000", 00:21:46.060 "admin_qpairs": 1, 00:21:46.060 "io_qpairs": 0, 00:21:46.060 "current_admin_qpairs": 1, 00:21:46.060 "current_io_qpairs": 0, 00:21:46.060 "pending_bdev_io": 0, 00:21:46.060 "completed_nvme_io": 0, 00:21:46.060 "transports": [ 00:21:46.060 { 00:21:46.060 "trtype": "TCP" 00:21:46.060 } 00:21:46.060 ] 00:21:46.060 }, 00:21:46.060 { 00:21:46.060 "name": "nvmf_tgt_poll_group_001", 00:21:46.060 "admin_qpairs": 0, 00:21:46.060 "io_qpairs": 4, 00:21:46.060 "current_admin_qpairs": 0, 00:21:46.060 "current_io_qpairs": 4, 00:21:46.060 "pending_bdev_io": 0, 00:21:46.060 "completed_nvme_io": 41878, 00:21:46.060 "transports": [ 00:21:46.060 { 00:21:46.060 "trtype": "TCP" 00:21:46.060 } 00:21:46.060 ] 00:21:46.060 }, 00:21:46.060 { 00:21:46.060 "name": "nvmf_tgt_poll_group_002", 00:21:46.060 "admin_qpairs": 0, 00:21:46.060 "io_qpairs": 0, 00:21:46.060 "current_admin_qpairs": 0, 00:21:46.060 "current_io_qpairs": 0, 00:21:46.060 "pending_bdev_io": 0, 00:21:46.060 "completed_nvme_io": 0, 00:21:46.060 "transports": [ 00:21:46.060 { 00:21:46.060 "trtype": "TCP" 00:21:46.060 } 00:21:46.060 ] 00:21:46.060 }, 00:21:46.060 { 00:21:46.060 "name": "nvmf_tgt_poll_group_003", 00:21:46.060 "admin_qpairs": 0, 00:21:46.060 "io_qpairs": 0, 00:21:46.060 "current_admin_qpairs": 0, 00:21:46.060 "current_io_qpairs": 0, 00:21:46.060 "pending_bdev_io": 0, 00:21:46.060 "completed_nvme_io": 0, 00:21:46.060 "transports": [ 00:21:46.060 { 00:21:46.060 "trtype": "TCP" 00:21:46.060 } 00:21:46.060 ] 00:21:46.060 } 00:21:46.060 ] 00:21:46.060 }' 00:21:46.060 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:46.060 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:46.321 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:46.321 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:46.321 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1283594 00:21:54.503 Initializing NVMe Controllers 00:21:54.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:54.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:54.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:54.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:54.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:54.503 Initialization complete. Launching workers. 
00:21:54.503 ======================================================== 00:21:54.503 Latency(us) 00:21:54.503 Device Information : IOPS MiB/s Average min max 00:21:54.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8636.70 33.74 7410.03 991.62 58095.65 00:21:54.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5244.20 20.49 12205.88 1387.42 58667.81 00:21:54.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5269.90 20.59 12185.23 1386.84 56558.42 00:21:54.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6718.40 26.24 9527.08 1262.07 56867.10 00:21:54.503 ======================================================== 00:21:54.503 Total : 25869.19 101.05 9904.83 991.62 58667.81 00:21:54.503 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.503 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.503 rmmod nvme_tcp 00:21:54.504 rmmod nvme_fabrics 00:21:54.504 rmmod nvme_keyring 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1283250 ']' 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1283250 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1283250 ']' 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1283250 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1283250 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1283250' 00:21:54.504 killing process with pid 1283250 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1283250 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1283250 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:54.504 
18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.504 18:37:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:57.047 00:21:57.047 real 0m53.691s 00:21:57.047 user 2m48.532s 00:21:57.047 sys 0m12.320s 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.047 ************************************ 00:21:57.047 END TEST nvmf_perf_adq 00:21:57.047 ************************************ 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.047 ************************************ 00:21:57.047 START TEST nvmf_shutdown 00:21:57.047 ************************************ 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:57.047 * Looking for test storage... 
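Two details from the perf_adq run that just completed are worth spelling out before the shutdown suite starts. First, teardown is cheap because every firewall rule was inserted with an '-m comment --comment SPDK_NVMF:...' tag, so the iptr helper above can drop them all with 'iptables-save | grep -v SPDK_NVMF | iptables-restore'. Second, the actual pass criterion (perf_adq.sh@107-109) is the poll-group count check; a sketch of it using SPDK's scripts/rpc.py (the test goes through its rpc_cmd wrapper instead):

# With ADQ steering in effect, all 4 I/O qpairs should land on one poll
# group, so at least 2 of the 4 groups must report zero active I/O qpairs.
idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
(( idle < 2 )) && { echo "ADQ steering failed: qpairs spread across poll groups"; exit 1; }

The stats dump above shows the expected shape: io_qpairs=4 on nvmf_tgt_poll_group_001 and 0 everywhere else, i.e. idle=3.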
00:21:57.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.047 --rc genhtml_branch_coverage=1 00:21:57.047 --rc genhtml_function_coverage=1 00:21:57.047 --rc genhtml_legend=1 00:21:57.047 --rc geninfo_all_blocks=1 00:21:57.047 --rc geninfo_unexecuted_blocks=1 00:21:57.047 00:21:57.047 ' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.047 --rc genhtml_branch_coverage=1 00:21:57.047 --rc genhtml_function_coverage=1 00:21:57.047 --rc genhtml_legend=1 00:21:57.047 --rc geninfo_all_blocks=1 00:21:57.047 --rc geninfo_unexecuted_blocks=1 00:21:57.047 00:21:57.047 ' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.047 --rc genhtml_branch_coverage=1 00:21:57.047 --rc genhtml_function_coverage=1 00:21:57.047 --rc genhtml_legend=1 00:21:57.047 --rc geninfo_all_blocks=1 00:21:57.047 --rc geninfo_unexecuted_blocks=1 00:21:57.047 00:21:57.047 ' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.047 --rc genhtml_branch_coverage=1 00:21:57.047 --rc genhtml_function_coverage=1 00:21:57.047 --rc genhtml_legend=1 00:21:57.047 --rc geninfo_all_blocks=1 00:21:57.047 --rc geninfo_unexecuted_blocks=1 00:21:57.047 00:21:57.047 ' 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
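The scripts/common.sh trace above is just a version gate: 'lt 1.15 2' asks whether the installed lcov (1.15, extracted via awk '{print $NF}') predates version 2, and cmp_versions answers by splitting both strings on IFS=.-: and comparing component by component, so the legacy branch/function coverage flags get exported. A condensed sketch of that comparison (simplified; the real cmp_versions also handles '>', '==' and the ge/le wrappers):

lt() {                                    # lt A B -> success if version A < version B
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller component: less-than
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger component: not less-than
    done
    return 1                              # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov detected'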
00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.047 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:57.048 18:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.048 ************************************ 00:21:57.048 START TEST nvmf_shutdown_tc1 00:21:57.048 ************************************ 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.048 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.181 18:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.181 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.182 18:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:05.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:05.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:05.182 Found net devices under 0000:31:00.0: cvl_0_0 00:22:05.182 18:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:05.182 Found net devices under 0000:31:00.1: cvl_0_1 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:22:05.182 00:22:05.182 --- 10.0.0.2 ping statistics --- 00:22:05.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.182 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:05.182 00:22:05.182 --- 10.0.0.1 ping statistics --- 00:22:05.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.182 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1290037 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1290037 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1290037 ']' 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.182 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.183 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
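While waitforlisten polls for the RPC socket, note how the target came up: the namespace topology is identical to the perf_adq run (cvl_0_0 with 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 with 10.0.0.1 left in the root namespace), but the core mask differs. Decoding the launch line above:

# -i 0      : shared-memory id, so later tools can attach to the same instance
# -e 0xFFFF : enable every tracepoint group (hence the spdk_trace notices that follow)
# -m 0x1E   : core mask 0b11110 -> reactors on cores 1-4, matching the
#             'Reactor started on core 1..4' lines below (the earlier perf_adq
#             target used -m 0xF, i.e. cores 0-3)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E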
00:22:05.183 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.183 18:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.183 [2024-10-08 18:37:58.799068] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:22:05.183 [2024-10-08 18:37:58.799131] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.183 [2024-10-08 18:37:58.891693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.183 [2024-10-08 18:37:58.988650] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.183 [2024-10-08 18:37:58.988707] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.183 [2024-10-08 18:37:58.988716] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.183 [2024-10-08 18:37:58.988723] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.183 [2024-10-08 18:37:58.988730] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.183 [2024-10-08 18:37:58.991052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.183 [2024-10-08 18:37:58.991187] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.183 [2024-10-08 18:37:58.991322] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:05.183 [2024-10-08 18:37:58.991323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.754 [2024-10-08 18:37:59.676030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:05.754 18:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.754 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.754 Malloc1 
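Each pass of the shutdown.sh@28 loop traced above appends one per-subsystem RPC batch to rpcs.txt, and shutdown.sh@36 then replays the accumulated file through rpc_cmd, which is what produces the Malloc1 through Malloc10 bdevs and the TCP listener notice interleaved below. The exact template lives in shutdown.sh; judging from the output, each iteration amounts to roughly the following batch (bdev size and serial number are hypothetical):

    # Approximate shape of one iteration (i = subsystem index): a malloc bdev,
    # a subsystem allowing any host, a namespace, and a TCP listener.
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420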
00:22:05.754 [2024-10-08 18:37:59.789715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.014 Malloc2 00:22:06.014 Malloc3 00:22:06.014 Malloc4 00:22:06.014 Malloc5 00:22:06.014 Malloc6 00:22:06.014 Malloc7 00:22:06.276 Malloc8 00:22:06.276 Malloc9 00:22:06.276 Malloc10 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1290290 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1290290 /var/tmp/bdevperf.sock 00:22:06.276 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1290290 ']' 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
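To drive I/O against those subsystems, shutdown.sh@78 launches a bdev_svc app whose bdev configuration never touches disk: gen_nvmf_target_json (traced at length below) builds one bdev_nvme_attach_controller entry per subsystem with a heredoc, joins the entries with a comma IFS, pretty-prints the result through jq, and hands it to --json via process substitution, which is why the app sees /dev/fd/63. A condensed, runnable sketch of the same pattern (the subsystems/bdev envelope around the entries is approximate):

    # Emit one attach-controller config entry per subsystem and join them
    # into a single JSON document consumed via process substitution.
    gen_json() {
        local entries=() i
        for i in "$@"; do
            entries+=("{
              \"params\": {
                \"name\": \"Nvme$i\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\",
                \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
                \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\",
                \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\",
                \"hdgst\": false, \"ddgst\": false
              },
              \"method\": \"bdev_nvme_attach_controller\"
            }")
        done
        local IFS=,
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${entries[*]}]}]}"
    }
    bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_json 1 2 3 4 5 6 7 8 9 10)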
00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 
00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 [2024-10-08 18:38:00.307643] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:22:06.277 [2024-10-08 18:38:00.307715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.277 { 00:22:06.277 "params": { 00:22:06.277 "name": "Nvme$subsystem", 00:22:06.277 "trtype": "$TEST_TRANSPORT", 00:22:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.277 "adrfam": "ipv4", 00:22:06.277 "trsvcid": "$NVMF_PORT", 00:22:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.277 "hdgst": ${hdgst:-false}, 00:22:06.277 "ddgst": ${ddgst:-false} 00:22:06.277 }, 00:22:06.277 "method": "bdev_nvme_attach_controller" 00:22:06.277 } 00:22:06.277 EOF 00:22:06.277 )") 00:22:06.277 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.539 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:06.539 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:06.539 { 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme$subsystem", 00:22:06.539 "trtype": "$TEST_TRANSPORT", 00:22:06.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.539 "adrfam": "ipv4", 
00:22:06.539 "trsvcid": "$NVMF_PORT", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.539 "hdgst": ${hdgst:-false}, 00:22:06.539 "ddgst": ${ddgst:-false} 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 } 00:22:06.539 EOF 00:22:06.539 )") 00:22:06.539 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:06.539 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:06.539 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:06.539 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme1", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme2", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme3", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme4", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme5", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme6", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme7", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 
"adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme8", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.539 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:06.539 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:06.539 "hdgst": false, 00:22:06.539 "ddgst": false 00:22:06.539 }, 00:22:06.539 "method": "bdev_nvme_attach_controller" 00:22:06.539 },{ 00:22:06.539 "params": { 00:22:06.539 "name": "Nvme9", 00:22:06.539 "trtype": "tcp", 00:22:06.539 "traddr": "10.0.0.2", 00:22:06.539 "adrfam": "ipv4", 00:22:06.539 "trsvcid": "4420", 00:22:06.540 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:06.540 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:06.540 "hdgst": false, 00:22:06.540 "ddgst": false 00:22:06.540 }, 00:22:06.540 "method": "bdev_nvme_attach_controller" 00:22:06.540 },{ 00:22:06.540 "params": { 00:22:06.540 "name": "Nvme10", 00:22:06.540 "trtype": "tcp", 00:22:06.540 "traddr": "10.0.0.2", 00:22:06.540 "adrfam": "ipv4", 00:22:06.540 "trsvcid": "4420", 00:22:06.540 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:06.540 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:06.540 "hdgst": false, 00:22:06.540 "ddgst": false 00:22:06.540 }, 00:22:06.540 "method": "bdev_nvme_attach_controller" 00:22:06.540 }' 00:22:06.540 [2024-10-08 18:38:00.394032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.540 [2024-10-08 18:38:00.491448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.924 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.924 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:07.924 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:07.924 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.924 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:08.184 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.184 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1290290 00:22:08.184 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:08.184 18:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:09.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1290290 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1290037 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.125 { 00:22:09.125 "params": { 00:22:09.125 "name": "Nvme$subsystem", 00:22:09.125 "trtype": "$TEST_TRANSPORT", 00:22:09.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.125 "adrfam": "ipv4", 00:22:09.125 "trsvcid": "$NVMF_PORT", 00:22:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.125 "hdgst": ${hdgst:-false}, 00:22:09.125 "ddgst": ${ddgst:-false} 00:22:09.125 }, 00:22:09.125 "method": "bdev_nvme_attach_controller" 00:22:09.125 } 00:22:09.125 EOF 00:22:09.125 )") 00:22:09.125 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.125 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.125 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.125 { 00:22:09.125 "params": { 00:22:09.125 "name": "Nvme$subsystem", 00:22:09.125 "trtype": "$TEST_TRANSPORT", 00:22:09.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.125 "adrfam": "ipv4", 00:22:09.125 "trsvcid": "$NVMF_PORT", 00:22:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.125 "hdgst": ${hdgst:-false}, 00:22:09.125 "ddgst": ${ddgst:-false} 00:22:09.125 }, 00:22:09.125 "method": "bdev_nvme_attach_controller" 00:22:09.125 } 00:22:09.125 EOF 00:22:09.125 )") 00:22:09.125 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.125 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.125 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.125 { 00:22:09.125 "params": { 00:22:09.125 "name": "Nvme$subsystem", 00:22:09.125 "trtype": "$TEST_TRANSPORT", 00:22:09.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.125 "adrfam": "ipv4", 00:22:09.125 "trsvcid": "$NVMF_PORT", 00:22:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.125 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 [2024-10-08 18:38:03.042631] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:22:09.126 [2024-10-08 18:38:03.042686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290871 ] 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.126 { 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme$subsystem", 00:22:09.126 "trtype": "$TEST_TRANSPORT", 00:22:09.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.126 
"adrfam": "ipv4", 00:22:09.126 "trsvcid": "$NVMF_PORT", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.126 "hdgst": ${hdgst:-false}, 00:22:09.126 "ddgst": ${ddgst:-false} 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 } 00:22:09.126 EOF 00:22:09.126 )") 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:09.126 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme1", 00:22:09.126 "trtype": "tcp", 00:22:09.126 "traddr": "10.0.0.2", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "4420", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.126 "hdgst": false, 00:22:09.126 "ddgst": false 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 },{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme2", 00:22:09.126 "trtype": "tcp", 00:22:09.126 "traddr": "10.0.0.2", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "4420", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.126 "hdgst": false, 00:22:09.126 "ddgst": false 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 },{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme3", 00:22:09.126 "trtype": "tcp", 00:22:09.126 "traddr": "10.0.0.2", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "4420", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.126 "hdgst": false, 00:22:09.126 "ddgst": false 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 },{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme4", 00:22:09.126 "trtype": "tcp", 00:22:09.126 "traddr": "10.0.0.2", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "4420", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.126 "hdgst": false, 00:22:09.126 "ddgst": false 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 },{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme5", 00:22:09.126 "trtype": "tcp", 00:22:09.126 "traddr": "10.0.0.2", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "4420", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.126 "hdgst": false, 00:22:09.126 "ddgst": false 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 },{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme6", 00:22:09.126 "trtype": "tcp", 00:22:09.126 "traddr": "10.0.0.2", 00:22:09.126 "adrfam": "ipv4", 00:22:09.126 "trsvcid": "4420", 00:22:09.126 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.126 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.126 "hdgst": false, 00:22:09.126 "ddgst": false 00:22:09.126 }, 00:22:09.126 "method": "bdev_nvme_attach_controller" 00:22:09.126 },{ 00:22:09.126 "params": { 00:22:09.126 "name": "Nvme7", 00:22:09.127 "trtype": "tcp", 00:22:09.127 "traddr": "10.0.0.2", 
00:22:09.127 "adrfam": "ipv4", 00:22:09.127 "trsvcid": "4420", 00:22:09.127 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.127 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.127 "hdgst": false, 00:22:09.127 "ddgst": false 00:22:09.127 }, 00:22:09.127 "method": "bdev_nvme_attach_controller" 00:22:09.127 },{ 00:22:09.127 "params": { 00:22:09.127 "name": "Nvme8", 00:22:09.127 "trtype": "tcp", 00:22:09.127 "traddr": "10.0.0.2", 00:22:09.127 "adrfam": "ipv4", 00:22:09.127 "trsvcid": "4420", 00:22:09.127 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.127 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:09.127 "hdgst": false, 00:22:09.127 "ddgst": false 00:22:09.127 }, 00:22:09.127 "method": "bdev_nvme_attach_controller" 00:22:09.127 },{ 00:22:09.127 "params": { 00:22:09.127 "name": "Nvme9", 00:22:09.127 "trtype": "tcp", 00:22:09.127 "traddr": "10.0.0.2", 00:22:09.127 "adrfam": "ipv4", 00:22:09.127 "trsvcid": "4420", 00:22:09.127 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.127 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:09.127 "hdgst": false, 00:22:09.127 "ddgst": false 00:22:09.127 }, 00:22:09.127 "method": "bdev_nvme_attach_controller" 00:22:09.127 },{ 00:22:09.127 "params": { 00:22:09.127 "name": "Nvme10", 00:22:09.127 "trtype": "tcp", 00:22:09.127 "traddr": "10.0.0.2", 00:22:09.127 "adrfam": "ipv4", 00:22:09.127 "trsvcid": "4420", 00:22:09.127 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.127 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.127 "hdgst": false, 00:22:09.127 "ddgst": false 00:22:09.127 }, 00:22:09.127 "method": "bdev_nvme_attach_controller" 00:22:09.127 }' 00:22:09.127 [2024-10-08 18:38:03.124299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.387 [2024-10-08 18:38:03.188479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.769 Running I/O for 1 seconds... 
00:22:11.709 1865.00 IOPS, 116.56 MiB/s
00:22:11.709 Latency(us)
[2024-10-08T16:38:05.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:11.709 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme1n1 : 1.13 230.52 14.41 0.00 0.00 267785.56 19223.89 234181.97
00:22:11.709 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme2n1 : 1.11 231.60 14.47 0.00 0.00 268816.85 14199.47 269134.51
00:22:11.709 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme3n1 : 1.17 272.47 17.03 0.00 0.00 225058.82 14417.92 256901.12
00:22:11.709 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme4n1 : 1.07 238.26 14.89 0.00 0.00 251666.13 16493.23 256901.12
00:22:11.709 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme5n1 : 1.14 229.75 14.36 0.00 0.00 256720.67 3181.23 230686.72
00:22:11.709 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme6n1 : 1.14 223.84 13.99 0.00 0.00 259445.12 17585.49 262144.00
00:22:11.709 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme7n1 : 1.19 269.24 16.83 0.00 0.00 212681.39 16493.23 225443.84
00:22:11.709 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme8n1 : 1.18 271.15 16.95 0.00 0.00 206874.45 14090.24 219327.15
00:22:11.709 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme9n1 : 1.18 222.27 13.89 0.00 0.00 246720.89 3249.49 262144.00
00:22:11.709 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.709 Verification LBA range: start 0x0 length 0x400
00:22:11.709 Nvme10n1 : 1.20 266.76 16.67 0.00 0.00 203806.63 11578.03 248162.99
[2024-10-08T16:38:05.766Z] ===================================================================================================================
[2024-10-08T16:38:05.766Z] Total : 2455.85 153.49 0.00 0.00 237522.30 3181.23 269134.51
00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:11.970 18:38:05
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.970 rmmod nvme_tcp 00:22:11.970 rmmod nvme_fabrics 00:22:11.970 rmmod nvme_keyring 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1290037 ']' 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1290037 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1290037 ']' 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1290037 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:11.970 18:38:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1290037 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1290037' 00:22:12.232 killing process with pid 1290037 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1290037 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1290037 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:12.232 18:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.232 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.492 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.492 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.492 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.439 00:22:14.439 real 0m17.441s 00:22:14.439 user 0m35.368s 00:22:14.439 sys 0m7.258s 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.439 ************************************ 00:22:14.439 END TEST nvmf_shutdown_tc1 00:22:14.439 ************************************ 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:14.439 ************************************ 00:22:14.439 START TEST nvmf_shutdown_tc2 00:22:14.439 ************************************ 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.439 
18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:14.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:14.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:14.439 Found net devices under 0000:31:00.0: cvl_0_0 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:14.439 Found net devices under 0000:31:00.1: cvl_0_1 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:14.439 18:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.439 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.700 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:14.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:22:14.700 00:22:14.700 --- 10.0.0.2 ping statistics --- 00:22:14.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.700 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:22:14.961 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:14.961 00:22:14.961 --- 10.0.0.1 ping statistics --- 00:22:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.961 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:14.961 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.961 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:14.961 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:14.961 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.961 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1291994 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1291994 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1291994 ']' 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.962 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.962 [2024-10-08 18:38:08.883206] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:22:14.962 [2024-10-08 18:38:08.883270] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.962 [2024-10-08 18:38:08.972545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.222 [2024-10-08 18:38:09.029400] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.222 [2024-10-08 18:38:09.029436] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.222 [2024-10-08 18:38:09.029445] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.222 [2024-10-08 18:38:09.029450] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.222 [2024-10-08 18:38:09.029454] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.222 [2024-10-08 18:38:09.030730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.222 [2024-10-08 18:38:09.030881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.222 [2024-10-08 18:38:09.031031] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.222 [2024-10-08 18:38:09.031033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.793 [2024-10-08 18:38:09.727506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.793 
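The nvmf_tcp_init sequence traced above splits the back-to-back E810 link into a target side and an initiator side: one port moves into a private network namespace, each side gets an address on 10.0.0.0/24, a tagged iptables rule opens TCP port 4420, and a ping in each direction proves reachability before the target app starts. Condensed to just the commands that appear in the trace:

ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1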
18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
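The ten "for i in ${num_subsystems[@]}" / cat passes above are shutdown.sh appending one block of subsystem-creation RPCs per iteration into rpcs.txt, replayed later as a single rpc.py batch (the Malloc1..Malloc10 bdevs that follow are its result). The RPC lines themselves are not echoed in this excerpt, so the heredoc body below is illustrative, using real SPDK RPC names:

rm -f rpcs.txt
num_subsystems=({1..10})
for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# Replayed in one shot, e.g.: scripts/rpc.py < rpcs.txt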
00:22:15.793 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.793 Malloc1 00:22:15.793 [2024-10-08 18:38:09.826259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.793 Malloc2 00:22:16.054 Malloc3 00:22:16.054 Malloc4 00:22:16.054 Malloc5 00:22:16.054 Malloc6 00:22:16.054 Malloc7 00:22:16.054 Malloc8 00:22:16.315 Malloc9 00:22:16.315 Malloc10 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1292369 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1292369 /var/tmp/bdevperf.sock 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1292369 ']' 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
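waitforlisten gates the test on the new bdevperf process actually serving its RPC socket: it retries up to max_retries=100 times, checking that the pid is still alive and that something is listening at /var/tmp/bdevperf.sock. A simplified sketch of such a wait loop; the readiness probe here (scanning the UNIX socket table with ss) is one plausible mechanism, not necessarily the exact check autotest_common.sh performs:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        # Ready once a UNIX-domain listener appears at the socket path.
        if ss -lx 2>/dev/null | grep -q -- "$rpc_addr"; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}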
00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.315 { 00:22:16.315 "params": { 00:22:16.315 "name": "Nvme$subsystem", 00:22:16.315 "trtype": "$TEST_TRANSPORT", 00:22:16.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.315 "adrfam": "ipv4", 00:22:16.315 "trsvcid": "$NVMF_PORT", 00:22:16.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.315 "hdgst": ${hdgst:-false}, 00:22:16.315 "ddgst": ${ddgst:-false} 00:22:16.315 }, 00:22:16.315 "method": "bdev_nvme_attach_controller" 00:22:16.315 } 00:22:16.315 EOF 00:22:16.315 )") 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.315 { 00:22:16.315 "params": { 00:22:16.315 "name": "Nvme$subsystem", 00:22:16.315 "trtype": "$TEST_TRANSPORT", 00:22:16.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.315 "adrfam": "ipv4", 00:22:16.315 "trsvcid": "$NVMF_PORT", 00:22:16.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.315 "hdgst": ${hdgst:-false}, 00:22:16.315 "ddgst": ${ddgst:-false} 00:22:16.315 }, 00:22:16.315 "method": "bdev_nvme_attach_controller" 00:22:16.315 } 00:22:16.315 EOF 00:22:16.315 )") 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.315 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.315 { 00:22:16.315 "params": { 00:22:16.315 "name": "Nvme$subsystem", 00:22:16.315 "trtype": "$TEST_TRANSPORT", 00:22:16.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.315 "adrfam": "ipv4", 00:22:16.315 "trsvcid": "$NVMF_PORT", 00:22:16.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": 
"bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 [2024-10-08 18:38:10.276296] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:22:16.316 [2024-10-08 18:38:10.276349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292369 ] 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:16.316 { 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme$subsystem", 00:22:16.316 "trtype": "$TEST_TRANSPORT", 00:22:16.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "$NVMF_PORT", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.316 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.316 "hdgst": ${hdgst:-false}, 00:22:16.316 "ddgst": ${ddgst:-false} 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 } 00:22:16.316 EOF 00:22:16.316 )") 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:16.316 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme1", 00:22:16.316 "trtype": "tcp", 00:22:16.316 "traddr": "10.0.0.2", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "4420", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.316 "hdgst": false, 00:22:16.316 "ddgst": false 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 },{ 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme2", 00:22:16.316 "trtype": "tcp", 00:22:16.316 "traddr": "10.0.0.2", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "4420", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:16.316 "hdgst": false, 00:22:16.316 "ddgst": false 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 },{ 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme3", 00:22:16.316 "trtype": "tcp", 00:22:16.316 "traddr": "10.0.0.2", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "4420", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:16.316 "hdgst": false, 00:22:16.316 "ddgst": false 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 },{ 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme4", 00:22:16.316 "trtype": "tcp", 00:22:16.316 "traddr": "10.0.0.2", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "4420", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:16.316 "hdgst": false, 00:22:16.316 "ddgst": false 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 },{ 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme5", 00:22:16.316 "trtype": "tcp", 00:22:16.316 "traddr": "10.0.0.2", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "4420", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:16.316 "hdgst": false, 00:22:16.316 "ddgst": false 00:22:16.316 }, 00:22:16.316 "method": "bdev_nvme_attach_controller" 00:22:16.316 },{ 00:22:16.316 "params": { 00:22:16.316 "name": "Nvme6", 00:22:16.316 "trtype": "tcp", 00:22:16.316 "traddr": "10.0.0.2", 00:22:16.316 "adrfam": "ipv4", 00:22:16.316 "trsvcid": "4420", 00:22:16.316 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:16.316 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:16.316 "hdgst": false, 00:22:16.316 "ddgst": false 00:22:16.317 }, 00:22:16.317 "method": "bdev_nvme_attach_controller" 00:22:16.317 },{ 00:22:16.317 "params": { 00:22:16.317 "name": "Nvme7", 00:22:16.317 "trtype": "tcp", 00:22:16.317 "traddr": "10.0.0.2", 00:22:16.317 "adrfam": "ipv4", 00:22:16.317 "trsvcid": "4420", 00:22:16.317 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:16.317 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:16.317 "hdgst": false, 00:22:16.317 "ddgst": false 00:22:16.317 }, 00:22:16.317 "method": "bdev_nvme_attach_controller" 00:22:16.317 },{ 00:22:16.317 "params": { 00:22:16.317 "name": "Nvme8", 00:22:16.317 "trtype": "tcp", 00:22:16.317 "traddr": "10.0.0.2", 00:22:16.317 "adrfam": "ipv4", 00:22:16.317 "trsvcid": "4420", 00:22:16.317 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:16.317 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:16.317 "hdgst": false, 00:22:16.317 "ddgst": false 00:22:16.317 }, 00:22:16.317 "method": "bdev_nvme_attach_controller" 00:22:16.317 },{ 00:22:16.317 "params": { 00:22:16.317 "name": "Nvme9", 00:22:16.317 "trtype": "tcp", 00:22:16.317 "traddr": "10.0.0.2", 00:22:16.317 "adrfam": "ipv4", 00:22:16.317 "trsvcid": "4420", 00:22:16.317 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:16.317 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:16.317 "hdgst": false, 00:22:16.317 "ddgst": false 00:22:16.317 }, 00:22:16.317 "method": "bdev_nvme_attach_controller" 00:22:16.317 },{ 00:22:16.317 "params": { 00:22:16.317 "name": "Nvme10", 00:22:16.317 "trtype": "tcp", 00:22:16.317 "traddr": "10.0.0.2", 00:22:16.317 "adrfam": "ipv4", 00:22:16.317 "trsvcid": "4420", 00:22:16.317 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:16.317 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:16.317 "hdgst": false, 00:22:16.317 "ddgst": false 00:22:16.317 }, 00:22:16.317 "method": "bdev_nvme_attach_controller" 00:22:16.317 }' 00:22:16.317 [2024-10-08 18:38:10.354190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.604 [2024-10-08 18:38:10.419089] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.016 Running I/O for 10 seconds... 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:18.016 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:18.277 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:18.538 
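The read_io_count progression above (3, then 67, then 131) is waitforio sampling Nvme1n1 through the bdev_get_iostat RPC every 250 ms; the loop starts with a budget of i=10 attempts and succeeds once at least 100 reads have completed, confirming bdevperf is really driving I/O before the shutdown is triggered. A compact sketch of that loop, assuming SPDK's rpc.py is on PATH:

waitforio_sketch() {
    local sock=$1 bdev=$2 i count ret=1
    for ((i = 10; i != 0; i--)); do
        count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [[ $count -ge 100 ]]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
# e.g. waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1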
18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1292369 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1292369 ']' 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1292369 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1292369 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:18.538 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1292369' 00:22:18.538 killing process with pid 1292369 00:22:18.539 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1292369 00:22:18.539 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1292369 00:22:18.800 Received shutdown signal, test time was about 0.950685 seconds 00:22:18.800 00:22:18.800 Latency(us) 00:22:18.800 [2024-10-08T16:38:12.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme1n1 : 0.92 209.34 13.08 0.00 0.00 302133.48 16930.13 227191.47 00:22:18.800 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme2n1 : 0.92 208.27 13.02 0.00 0.00 297360.50 17913.17 263891.63 00:22:18.800 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme3n1 : 0.95 270.88 16.93 0.00 0.00 223976.21 13161.81 248162.99 00:22:18.800 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme4n1 : 0.93 273.90 17.12 0.00 0.00 216640.43 21299.20 242920.11 00:22:18.800 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme5n1 : 0.95 269.53 16.85 0.00 0.00 215352.53 14636.37 251658.24 00:22:18.800 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme6n1 : 0.92 208.79 13.05 0.00 0.00 271402.10 45001.39 223696.21 00:22:18.800 Job: Nvme7n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme7n1 : 0.94 275.98 17.25 0.00 0.00 200883.28 2129.92 234181.97 00:22:18.800 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme8n1 : 0.94 276.94 17.31 0.00 0.00 195532.64 2211.84 244667.73 00:22:18.800 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme9n1 : 0.93 206.63 12.91 0.00 0.00 255594.95 23374.51 255153.49 00:22:18.800 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.800 Verification LBA range: start 0x0 length 0x400 00:22:18.800 Nvme10n1 : 0.93 206.11 12.88 0.00 0.00 250030.65 19005.44 270882.13 00:22:18.800 [2024-10-08T16:38:12.857Z] =================================================================================================================== 00:22:18.800 [2024-10-08T16:38:12.857Z] Total : 2406.37 150.40 0.00 0.00 238117.62 2129.92 270882.13 00:22:18.800 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:19.743 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1291994 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.744 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.744 rmmod nvme_tcp 00:22:20.004 rmmod nvme_fabrics 00:22:20.004 rmmod nvme_keyring 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1291994 ']' 00:22:20.004 18:38:13 
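The summary table is internally consistent with bdevperf's -q 64 -o 65536 settings: at a 64 KiB I/O size, the aggregate 2406.37 read IOPS works out to 2406.37 / 16 ≈ 150.40 MiB/s, exactly the throughput reported in the Total line (and per device, e.g. Nvme1n1: 209.34 IOPS / 16 ≈ 13.08 MiB/s).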
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1291994 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1291994 ']' 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1291994 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291994 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291994' 00:22:20.004 killing process with pid 1291994 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1291994 00:22:20.004 18:38:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1291994 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.266 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.182 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.183 00:22:22.183 real 0m7.775s 00:22:22.183 user 0m23.108s 00:22:22.183 sys 0m1.291s 00:22:22.183 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:22.183 18:38:16 
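Firewall teardown relies on the tag applied at insert time: every rule the suite added carries the SPDK_NVMF comment, so iptr (traced above) can strip them all by filtering the saved ruleset rather than deleting rules by position. The equivalent commands for this run, with the namespace cleanup that remove_spdk_ns performs sketched alongside:

iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # what _remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1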
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.183 ************************************ 00:22:22.183 END TEST nvmf_shutdown_tc2 00:22:22.183 ************************************ 00:22:22.444 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:22.444 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:22.444 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:22.445 ************************************ 00:22:22.445 START TEST nvmf_shutdown_tc3 00:22:22.445 ************************************ 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.445 18:38:16 
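The asterisk banners and the real/user/sys triplet between tc2 and tc3 come from run_test, which brackets each test function with START/END markers and runs it under bash's time builtin. A simplified sketch of that wrapper:

run_test_sketch() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys summary seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
# e.g. run_test_sketch nvmf_shutdown_tc3 nvmf_shutdown_tc3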
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.445 18:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:22.445 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:22.445 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:22.445 Found net devices under 0000:31:00.0: cvl_0_0 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:22.445 Found net devices under 0000:31:00.1: cvl_0_1 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:22.445 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.446 18:38:16 
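With two ports discovered, the harness assigns fixed roles on a private /24: the first netdev becomes the target side, the second the initiator side. Condensed from the assignments traced above (values as in this run):

TCP_INTERFACE_LIST=("${net_devs[@]}")              # (cvl_0_0 cvl_0_1)
NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}     # cvl_0_0 -> 10.0.0.2
NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}  # cvl_0_1 -> 10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_FIRST_INITIATOR_IP=10.0.0.1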
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.446 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:22:22.707 00:22:22.707 --- 10.0.0.2 ping statistics --- 00:22:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.707 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:22:22.707 00:22:22.707 --- 10.0.0.1 ping statistics --- 00:22:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.707 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:22.707 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1293841 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1293841 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1293841 ']' 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
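The two successful pings bracket the namespace wiring done at common.sh@265-291: the target port is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic actually crosses the link between the two ports. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # ipts also tags the rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target invocation below is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, which is why nvmf_tgt launches inside the namespace.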
00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.708 18:38:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.968 [2024-10-08 18:38:16.769439] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:22:22.968 [2024-10-08 18:38:16.769507] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.968 [2024-10-08 18:38:16.857864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.968 [2024-10-08 18:38:16.917836] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.968 [2024-10-08 18:38:16.917881] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.968 [2024-10-08 18:38:16.917891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.968 [2024-10-08 18:38:16.917896] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.968 [2024-10-08 18:38:16.917900] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.968 [2024-10-08 18:38:16.919447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.968 [2024-10-08 18:38:16.919600] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.968 [2024-10-08 18:38:16.919755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.969 [2024-10-08 18:38:16.919757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:23.540 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.540 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:23.540 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:23.540 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.540 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.801 [2024-10-08 18:38:17.618303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:23.801 18:38:17 
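nvmf_tgt (pid 1293841) is now running on cores 1-4 (-m 0x1E) inside the namespace, and shutdown.sh@21 creates the TCP transport over its RPC socket. Assuming rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock (the wrapper detail is not visible in this trace), the standalone equivalent of that call is:

spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192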
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.801 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.801 Malloc1 
00:22:23.801 [2024-10-08 18:38:17.717065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.801 Malloc2 00:22:23.801 Malloc3 00:22:23.801 Malloc4 00:22:23.801 Malloc5 00:22:24.063 Malloc6 00:22:24.063 Malloc7 00:22:24.063 Malloc8 00:22:24.063 Malloc9 00:22:24.063 Malloc10 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1294074 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1294074 /var/tmp/bdevperf.sock 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1294074 ']' 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
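The cat loop at shutdown.sh@28-29 appends one RPC batch per subsystem id to rpcs.txt, and shutdown.sh@36 then replays the file through a single rpc_cmd call (stdin replay is inferred); the Malloc1..Malloc10 bdevs and the single 4420 listener above are the visible result. A hedged sketch of the per-iteration batch — the exact RPC grouping and the malloc size/block-size arguments (64 512) are inferred, not shown in this trace:

for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done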
00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:24.063 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.064 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:24.064 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:24.064 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.064 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.064 { 00:22:24.064 "params": { 00:22:24.064 "name": "Nvme$subsystem", 00:22:24.064 "trtype": "$TEST_TRANSPORT", 00:22:24.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.064 "adrfam": "ipv4", 00:22:24.064 "trsvcid": "$NVMF_PORT", 00:22:24.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.064 "hdgst": ${hdgst:-false}, 00:22:24.064 "ddgst": ${ddgst:-false} 00:22:24.064 }, 00:22:24.064 "method": "bdev_nvme_attach_controller" 00:22:24.064 } 00:22:24.064 EOF 00:22:24.064 )") 00:22:24.064 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.326 { 00:22:24.326 "params": { 00:22:24.326 "name": "Nvme$subsystem", 00:22:24.326 "trtype": "$TEST_TRANSPORT", 00:22:24.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.326 "adrfam": "ipv4", 00:22:24.326 "trsvcid": "$NVMF_PORT", 00:22:24.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.326 "hdgst": ${hdgst:-false}, 00:22:24.326 "ddgst": ${ddgst:-false} 00:22:24.326 }, 00:22:24.326 "method": "bdev_nvme_attach_controller" 00:22:24.326 } 00:22:24.326 EOF 00:22:24.326 )") 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.326 { 00:22:24.326 "params": { 00:22:24.326 "name": "Nvme$subsystem", 00:22:24.326 "trtype": "$TEST_TRANSPORT", 00:22:24.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.326 "adrfam": "ipv4", 00:22:24.326 "trsvcid": "$NVMF_PORT", 00:22:24.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.326 "hdgst": ${hdgst:-false}, 00:22:24.326 "ddgst": ${ddgst:-false} 00:22:24.326 }, 00:22:24.326 "method": 
"bdev_nvme_attach_controller" 00:22:24.326 } 00:22:24.326 EOF 00:22:24.326 )") 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.326 { 00:22:24.326 "params": { 00:22:24.326 "name": "Nvme$subsystem", 00:22:24.326 "trtype": "$TEST_TRANSPORT", 00:22:24.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.326 "adrfam": "ipv4", 00:22:24.326 "trsvcid": "$NVMF_PORT", 00:22:24.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.326 "hdgst": ${hdgst:-false}, 00:22:24.326 "ddgst": ${ddgst:-false} 00:22:24.326 }, 00:22:24.326 "method": "bdev_nvme_attach_controller" 00:22:24.326 } 00:22:24.326 EOF 00:22:24.326 )") 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.326 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.326 { 00:22:24.326 "params": { 00:22:24.326 "name": "Nvme$subsystem", 00:22:24.326 "trtype": "$TEST_TRANSPORT", 00:22:24.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.326 "adrfam": "ipv4", 00:22:24.326 "trsvcid": "$NVMF_PORT", 00:22:24.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.327 "hdgst": ${hdgst:-false}, 00:22:24.327 "ddgst": ${ddgst:-false} 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 } 00:22:24.327 EOF 00:22:24.327 )") 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.327 { 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme$subsystem", 00:22:24.327 "trtype": "$TEST_TRANSPORT", 00:22:24.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "$NVMF_PORT", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.327 "hdgst": ${hdgst:-false}, 00:22:24.327 "ddgst": ${ddgst:-false} 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 } 00:22:24.327 EOF 00:22:24.327 )") 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.327 [2024-10-08 18:38:18.162703] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:22:24.327 [2024-10-08 18:38:18.162757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294074 ] 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.327 { 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme$subsystem", 00:22:24.327 "trtype": "$TEST_TRANSPORT", 00:22:24.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "$NVMF_PORT", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.327 "hdgst": ${hdgst:-false}, 00:22:24.327 "ddgst": ${ddgst:-false} 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 } 00:22:24.327 EOF 00:22:24.327 )") 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.327 { 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme$subsystem", 00:22:24.327 "trtype": "$TEST_TRANSPORT", 00:22:24.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "$NVMF_PORT", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.327 "hdgst": ${hdgst:-false}, 00:22:24.327 "ddgst": ${ddgst:-false} 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 } 00:22:24.327 EOF 00:22:24.327 )") 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.327 { 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme$subsystem", 00:22:24.327 "trtype": "$TEST_TRANSPORT", 00:22:24.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "$NVMF_PORT", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.327 "hdgst": ${hdgst:-false}, 00:22:24.327 "ddgst": ${ddgst:-false} 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 } 00:22:24.327 EOF 00:22:24.327 )") 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:24.327 { 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme$subsystem", 00:22:24.327 "trtype": "$TEST_TRANSPORT", 00:22:24.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "$NVMF_PORT", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.327 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.327 "hdgst": ${hdgst:-false}, 00:22:24.327 "ddgst": ${ddgst:-false} 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 } 00:22:24.327 EOF 00:22:24.327 )") 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:24.327 18:38:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme1", 00:22:24.327 "trtype": "tcp", 00:22:24.327 "traddr": "10.0.0.2", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "4420", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.327 "hdgst": false, 00:22:24.327 "ddgst": false 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 },{ 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme2", 00:22:24.327 "trtype": "tcp", 00:22:24.327 "traddr": "10.0.0.2", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "4420", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.327 "hdgst": false, 00:22:24.327 "ddgst": false 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 },{ 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme3", 00:22:24.327 "trtype": "tcp", 00:22:24.327 "traddr": "10.0.0.2", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "4420", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:24.327 "hdgst": false, 00:22:24.327 "ddgst": false 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 },{ 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme4", 00:22:24.327 "trtype": "tcp", 00:22:24.327 "traddr": "10.0.0.2", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "4420", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:24.327 "hdgst": false, 00:22:24.327 "ddgst": false 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 },{ 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme5", 00:22:24.327 "trtype": "tcp", 00:22:24.327 "traddr": "10.0.0.2", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "4420", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:24.327 "hdgst": false, 00:22:24.327 "ddgst": false 00:22:24.327 }, 00:22:24.327 "method": "bdev_nvme_attach_controller" 00:22:24.327 },{ 00:22:24.327 "params": { 00:22:24.327 "name": "Nvme6", 00:22:24.327 "trtype": "tcp", 00:22:24.327 "traddr": "10.0.0.2", 00:22:24.327 "adrfam": "ipv4", 00:22:24.327 "trsvcid": "4420", 00:22:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:24.327 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:24.327 "hdgst": false, 00:22:24.327 "ddgst": false 00:22:24.328 }, 00:22:24.328 "method": "bdev_nvme_attach_controller" 00:22:24.328 },{ 00:22:24.328 "params": { 00:22:24.328 "name": "Nvme7", 00:22:24.328 "trtype": "tcp", 00:22:24.328 "traddr": "10.0.0.2", 00:22:24.328 "adrfam": "ipv4", 00:22:24.328 "trsvcid": "4420", 00:22:24.328 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:24.328 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:24.328 "hdgst": false, 00:22:24.328 "ddgst": false 00:22:24.328 }, 00:22:24.328 "method": "bdev_nvme_attach_controller" 00:22:24.328 },{ 00:22:24.328 "params": { 00:22:24.328 "name": "Nvme8", 00:22:24.328 "trtype": "tcp", 00:22:24.328 "traddr": "10.0.0.2", 00:22:24.328 "adrfam": "ipv4", 00:22:24.328 "trsvcid": "4420", 00:22:24.328 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:24.328 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:24.328 "hdgst": false, 00:22:24.328 "ddgst": false 00:22:24.328 }, 00:22:24.328 "method": "bdev_nvme_attach_controller" 00:22:24.328 },{ 00:22:24.328 "params": { 00:22:24.328 "name": "Nvme9", 00:22:24.328 "trtype": "tcp", 00:22:24.328 "traddr": "10.0.0.2", 00:22:24.328 "adrfam": "ipv4", 00:22:24.328 "trsvcid": "4420", 00:22:24.328 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:24.328 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:24.328 "hdgst": false, 00:22:24.328 "ddgst": false 00:22:24.328 }, 00:22:24.328 "method": "bdev_nvme_attach_controller" 00:22:24.328 },{ 00:22:24.328 "params": { 00:22:24.328 "name": "Nvme10", 00:22:24.328 "trtype": "tcp", 00:22:24.328 "traddr": "10.0.0.2", 00:22:24.328 "adrfam": "ipv4", 00:22:24.328 "trsvcid": "4420", 00:22:24.328 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:24.328 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:24.328 "hdgst": false, 00:22:24.328 "ddgst": false 00:22:24.328 }, 00:22:24.328 "method": "bdev_nvme_attach_controller" 00:22:24.328 }' 00:22:24.328 [2024-10-08 18:38:18.245111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.328 [2024-10-08 18:38:18.310172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.711 Running I/O for 10 seconds... 00:22:25.711 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.711 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:25.711 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:25.711 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.711 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # 
(( i = 10 )) 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:25.971 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:26.233 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:26.493 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:26.493 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:26.493 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.493 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.493 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.493 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.773 18:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1293841 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1293841 ']' 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1293841 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1293841 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1293841' 00:22:26.773 killing process with pid 1293841 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1293841 00:22:26.773 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1293841 00:22:26.773 [2024-10-08 18:38:20.654688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773 [2024-10-08 18:38:20.654774] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34010 is same with the state(6) to be set 00:22:26.773
[... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line repeated dozens of times between 18:38:20.654779 and 18:38:20.658727 as the target tore down, cycling through tqpair=0x1a34010, 0x1a36bc0, 0x1a344e0, and 0x1a349b0; verbatim repeats condensed ...]
state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 
18:38:20.658937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.775 [2024-10-08 18:38:20.658960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.658997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.659002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.659007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.659011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a349b0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same 
with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660260] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the 
state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.660424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a34ea0 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.776 [2024-10-08 18:38:20.661096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 
18:38:20.661203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same 
with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.661358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35370 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662146] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.777 [2024-10-08 18:38:20.662215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the 
state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.662402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35860 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 
18:38:20.663181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.778 [2024-10-08 18:38:20.663267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.779 [2024-10-08 18:38:20.663271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.779 [2024-10-08 18:38:20.663276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set 00:22:26.779 [2024-10-08 18:38:20.663281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same 
with the state(6) to be set
00:22:26.779 [2024-10-08 18:38:20.663285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d30 is same with the state(6) to be set
[log condensed: the tcp.c:1773 line above repeats ~30 times for tqpair=0x1a35d30 (18:38:20.663285 through 18:38:20.672825) and then ~60 times for tqpair=0x1a36200 (18:38:20.673515 through 18:38:20.673816); only the tqpair address and the microsecond timestamps differ between occurrences]
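[the repeated tcp.c:1773 error is SPDK's guard against a no-op transition on the TCP qpair's receive state machine. Below is a minimal, self-contained model of that guard, paraphrased from nvmf_tcp_qpair_set_recv_state() in SPDK's lib/nvmf/tcp.c; the struct and enum here are abbreviated stand-ins rather than SPDK's real definitions, and which named state the value 6 maps to depends on the SPDK revision in this build]

    #include <stdio.h>

    /* Stand-in for SPDK's enum nvme_tcp_pdu_recv_state; the log only shows
     * the numeric value 6, so no state name is asserted here. */
    enum pdu_recv_state { PDU_RECV_STATE_6 = 6 };

    struct tcp_qpair { enum pdu_recv_state recv_state; };

    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* The line spammed above: the qpair is already in the
             * requested state, so the transition is a no-op. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state; /* the real function also resets PDU bookkeeping */
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = PDU_RECV_STATE_6 };
        set_recv_state(&q, PDU_RECV_STATE_6); /* prints the error once */
        return 0;
    }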
00:22:26.780 [2024-10-08 18:38:20.676129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:26.780 [2024-10-08 18:38:20.676164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: between 18:38:20.676129 and 18:38:20.677023 the command/completion pair above repeats for admin commands cid:0 through cid:3 on each of ten qpairs; after each group of four, nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state logs "*ERROR*: The recv state of tqpair=0x... is same with the state(6) to be set" for, in order, tqpair 0x1e97610, 0x1f7f340, 0x1f81030, 0x1f762c0, 0x23abd30, 0x1f77b00, 0x23dee10, 0x23d2930, 0x23aab10 and 0x23abb50]
00:22:26.781 [2024-10-08 18:38:20.677679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.781 [2024-10-08 18:38:20.677701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
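[every outstanding command here is completed with "ABORTED - SQ DELETION (00/08)": the submission queues are deleted out from under the in-flight commands while the controller resets. The parenthesized pair is the NVMe Status Code Type / Status Code; the decoder below is a hypothetical helper (not an SPDK API) showing the spec-level meaning: SCT 0x0 is the generic command status type, and SC 0x08 within it is "Command Aborted due to SQ Deletion"]

    #include <stdio.h>

    /* Hypothetical decoder for the "(SCT/SC)" pair printed by
     * spdk_nvme_print_completion in the records above. */
    static const char *decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "other status (see the NVMe base spec status code tables)";
    }

    int main(void)
    {
        printf("(00/08) => %s\n", decode_status(0x00, 0x08));
        return 0;
    }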
[log condensed: the WRITE/ABORTED pair above repeats for sqid:1 cid:0 through cid:63 between 18:38:20.677679 and 18:38:20.678803, the lba advancing by 128 blocks per command from lba:16384 at cid:0 to lba:24448 at cid:63, len:128 throughout]
00:22:26.783 [2024-10-08 18:38:20.679126] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23862e0 was disconnected and freed. reset controller.
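[the aborted I/O is strictly sequential, so the collapsed runs can be reconstructed from lba = 16384 + 128 * cid; a two-line check of that arithmetic, under the assumption the pattern holds for every cid in the run]

    #include <stdio.h>

    int main(void)
    {
        /* lba = 16384 + 128 * cid, as in the collapsed WRITE run above */
        printf("cid:0  -> lba:%u\n", 16384u + 128u * 0u);   /* 16384 */
        printf("cid:63 -> lba:%u\n", 16384u + 128u * 63u);  /* 24448 */
        return 0;
    }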
00:22:26.783 [2024-10-08 18:38:20.679482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.783 [2024-10-08 18:38:20.679499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.783 [2024-10-08 18:38:20.679516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.783 [2024-10-08 18:38:20.679524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the READ/ABORTED pair repeats for sqid:1 cid:0 upward between 18:38:20.679516 and 18:38:20.680498, lba again advancing by 128 blocks per command from lba:16384; the run continues in the records that follow]
00:22:26.784 [2024-10-08 18:38:20.680507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.784 [2024-10-08 18:38:20.680515] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.784 [2024-10-08 18:38:20.680524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.784 [2024-10-08 18:38:20.680532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.784 [2024-10-08 18:38:20.680541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.784 [2024-10-08 18:38:20.680548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.784 [2024-10-08 18:38:20.680558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.784 [2024-10-08 18:38:20.680565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.784 [2024-10-08 18:38:20.680574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.784 [2024-10-08 18:38:20.680581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.784 [2024-10-08 18:38:20.680590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.784 [2024-10-08 18:38:20.680599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.784 [2024-10-08 18:38:20.680649] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2384da0 was disconnected and freed. reset controller. 
00:22:26.784 [2024-10-08 18:38:20.683386] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:26.784 [2024-10-08 18:38:20.683415] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:26.784 [2024-10-08 18:38:20.683431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dee10 (9): Bad file descriptor
00:22:26.784 [2024-10-08 18:38:20.683445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23abd30 (9): Bad file descriptor
00:22:26.784 [2024-10-08 18:38:20.684080 - 18:38:20.684344] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (8 occurrences)
00:22:26.785 [2024-10-08 18:38:20.684847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.785 [2024-10-08 18:38:20.684864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23abd30 with addr=10.0.0.2, port=4420
00:22:26.785 [2024-10-08 18:38:20.684872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abd30 is same with the state(6) to be set
00:22:26.785 [2024-10-08 18:38:20.685198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.785 [2024-10-08 18:38:20.685209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dee10 with addr=10.0.0.2, port=4420
00:22:26.785 [2024-10-08 18:38:20.685216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dee10 is same with the state(6) to be set
00:22:26.785 [2024-10-08 18:38:20.685303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23abd30 (9): Bad file descriptor
00:22:26.785 [2024-10-08 18:38:20.685315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dee10 (9): Bad file descriptor
00:22:26.785 [2024-10-08 18:38:20.685358] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:22:26.785 [2024-10-08 18:38:20.685366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:22:26.785 [2024-10-08 18:38:20.685374] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:22:26.785 [2024-10-08 18:38:20.685388] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:26.785 [2024-10-08 18:38:20.685395] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:26.785 [2024-10-08 18:38:20.685402] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:26.785 [2024-10-08 18:38:20.685446] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.785 [2024-10-08 18:38:20.685455] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.785 [2024-10-08 18:38:20.686120 - 18:38:20.686250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e97610 / 0x1f7f340 / 0x1f81030 / 0x1f762c0 / 0x1f77b00 / 0x23d2930 / 0x23aab10 / 0x23abb50 (9): Bad file descriptor
00:22:26.785 [2024-10-08 18:38:20.694182] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:26.785 [2024-10-08 18:38:20.694202] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:26.785 [2024-10-08 18:38:20.694579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.785 [2024-10-08 18:38:20.694594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dee10 with addr=10.0.0.2, port=4420
00:22:26.785 [2024-10-08 18:38:20.694602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dee10 is same with the state(6) to be set
00:22:26.785 [2024-10-08 18:38:20.694938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.785 [2024-10-08 18:38:20.694948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23abd30 with addr=10.0.0.2, port=4420
00:22:26.785 [2024-10-08 18:38:20.694956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abd30 is same with the state(6) to be set
00:22:26.785 [2024-10-08 18:38:20.695003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dee10 (9): Bad file descriptor
00:22:26.785 [2024-10-08 18:38:20.695013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23abd30 (9): Bad file descriptor
00:22:26.785 [2024-10-08 18:38:20.695054] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:26.785 [2024-10-08 18:38:20.695062] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:26.785 [2024-10-08 18:38:20.695069] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:26.785 [2024-10-08 18:38:20.695082] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:22:26.785 [2024-10-08 18:38:20.695088] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:22:26.785 [2024-10-08 18:38:20.695095] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:22:26.785 [2024-10-08 18:38:20.695140] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.785 [2024-10-08 18:38:20.695148] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.785 [2024-10-08 18:38:20.696291 - 18:38:20.697382] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, repeated for cid:0-63 (lba:16384-24448), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:22:26.787 [2024-10-08 18:38:20.697391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21850e0 is same with the state(6) to be set
00:22:26.787 [2024-10-08 18:38:20.698673 - 18:38:20.699760] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, repeated for cid:0-62 (lba:16384-24320), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (63 command/completion pairs)
00:22:26.789 [2024-10-08
18:38:20.699771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.699778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.699786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21862d0 is same with the state(6) to be set 00:22:26.789 [2024-10-08 18:38:20.701056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.789 [2024-10-08 18:38:20.701549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.789 [2024-10-08 18:38:20.701555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.701988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.701996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.702131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.702139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246bae0 is same with the state(6) to be set 00:22:26.790 [2024-10-08 18:38:20.703413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.703426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.703439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.703448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.703460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.703469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.790 [2024-10-08 18:38:20.703480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.790 [2024-10-08 18:38:20.703489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703500] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.703982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.703991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.791 [2024-10-08 18:38:20.704110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.791 [2024-10-08 18:38:20.704118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:26.792 [2024-10-08 18:38:20.704184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 
18:38:20.704352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.704502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.704510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246cff0 is same with the state(6) to be set 00:22:26.792 [2024-10-08 18:38:20.705771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.705783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.705796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.705805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.705816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.705825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.705836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.705845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.705856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.705865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.792 [2024-10-08 18:38:20.705875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.792 [2024-10-08 18:38:20.705883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.705892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.705899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.705909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.705921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.705931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.705938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.705947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.705954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.705964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.705971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.705986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.705994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.793 [2024-10-08 18:38:20.706127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.793 [2024-10-08 18:38:20.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.793 [2024-10-08 18:38:20.706146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.793 [2024-10-08 18:38:20.706155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.793 [2024-10-08 18:38:20.706163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 41 further READ commands (cid:22-62, nsid:1, lba:19200-24320, len:128) printed and completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:22:26.794 [2024-10-08 18:38:20.706856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.794 [2024-10-08 18:38:20.706864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.794 [2024-10-08 18:38:20.706872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246e490 is same with the state(6) to be set
00:22:26.794 [2024-10-08 18:38:20.708143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.794 [2024-10-08 18:38:20.708155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 further READ commands (cid:5-61, nsid:1, lba:17024-24192, len:128) printed and completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:22:26.796 [2024-10-08 18:38:20.709130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.709137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.796 [2024-10-08 18:38:20.709147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.709154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.796 [2024-10-08 18:38:20.709163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.709170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.796 [2024-10-08 18:38:20.709179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.709186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.796 [2024-10-08 18:38:20.709196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.709203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.796 [2024-10-08 18:38:20.709212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.709220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.796 [2024-10-08 18:38:20.709228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380e50 is same with the state(6) to be set
00:22:26.796 [2024-10-08 18:38:20.710501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.796 [2024-10-08 18:38:20.710514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further READ commands (cid:1-62, nsid:1, lba:16512-24320, len:128) printed and completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:22:26.798 [2024-10-08 18:38:20.711585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.798 [2024-10-08 18:38:20.711593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:26.798 [2024-10-08 18:38:20.711601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2382340 is same with the state(6) to be set
00:22:26.798 [2024-10-08 18:38:20.712861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.798 [2024-10-08 18:38:20.712874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 26 further READ commands (cid:1-26, nsid:1, lba:16512-19712, len:128) printed and completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:22:26.798 [2024-10-08 18:38:20.713334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.798 [2024-10-08 18:38:20.713341] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.798 [2024-10-08 18:38:20.713358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.798 [2024-10-08 18:38:20.713374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.798 [2024-10-08 18:38:20.713391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.798 [2024-10-08 18:38:20.713408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.798 [2024-10-08 18:38:20.713425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.798 [2024-10-08 18:38:20.713441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.798 [2024-10-08 18:38:20.713451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.799 [2024-10-08 18:38:20.713942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.799 [2024-10-08 18:38:20.713950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23838e0 is same with the state(6) to be set 00:22:26.799 [2024-10-08 18:38:20.715440] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.799 [2024-10-08 18:38:20.715464] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:26.799 [2024-10-08 18:38:20.715474] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:26.799 [2024-10-08 18:38:20.715484] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:26.800 [2024-10-08 18:38:20.715554] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:26.800 [2024-10-08 18:38:20.715572] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:26.800 [2024-10-08 18:38:20.715585] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
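Every aborted READ above completes with the same status pair (00/08): Status Code Type 00 (generic) with Status Code 08h, which the NVMe base spec defines as Command Aborted due to SQ Deletion, exactly what in-flight I/O should report while the target tears down its submission queues during shutdown. A tiny decoder sketch for the codes this log actually prints (hypothetical helper, not part of the SPDK tree):

  # decode_nvme_status SCT SC - names the "(SCT/SC)" pair that
  # spdk_nvme_print_completion prints; generic type (SCT=00) only.
  decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
      00/00) echo "SUCCESS" ;;
      00/07) echo "ABORTED - BY REQUEST" ;;   # host-requested abort
      00/08) echo "ABORTED - SQ DELETION" ;;  # what every completion above shows
      *)     echo "unknown (sct=$sct sc=$sc)" ;;
    esac
  }
  decode_nvme_status 00 08   # -> ABORTED - SQ DELETION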
00:22:26.800 [2024-10-08 18:38:20.715595] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:26.800 [2024-10-08 18:38:20.715680] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:26.800 [2024-10-08 18:38:20.715690] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:26.800 [2024-10-08 18:38:20.715699] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:26.800 task offset: 16384 on job bdev=Nvme10n1 fails
00:22:26.800
00:22:26.800                                                   Latency(us)
00:22:26.800 [2024-10-08T16:38:20.857Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:22:26.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme1n1 ended in about 0.98 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme1n1            :       0.98     131.09       8.19      65.55       0.00  321912.60   22828.37  344282.45
00:22:26.800 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme2n1 ended in about 0.98 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme2n1            :       0.98     130.77       8.17      65.39       0.00  316161.71   31020.37  328553.81
00:22:26.800 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme3n1 ended in about 0.98 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme3n1            :       0.98     134.54       8.41      65.23       0.00  304138.03   24685.23  307582.29
00:22:26.800 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme4n1 ended in about 0.98 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme4n1            :       0.98     130.15       8.13      65.07       0.00  304835.41   23811.41  344282.45
00:22:26.800 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme5n1 ended in about 0.99 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme5n1            :       0.99     129.84       8.11      64.92       0.00  299120.07   29928.11  326806.19
00:22:26.800 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme6n1 ended in about 0.99 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme6n1            :       0.99     133.58       8.35      64.76       0.00  287521.65   18022.40  307582.29
00:22:26.800 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme7n1 ended in about 0.99 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme7n1            :       0.99     129.22       8.08      64.61       0.00  287776.43   29491.20  300591.79
00:22:26.800 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme8n1 ended in about 0.99 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme8n1            :       0.99     128.91       8.06      64.46       0.00  282283.80   53957.97  298844.16
00:22:26.800 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme9n1 ended in about 0.96 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme9n1            :       0.96     133.17       8.32      66.58       0.00  265284.55    4805.97  330301.44
00:22:26.800 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.800 Job: Nvme10n1 ended in about 0.96 seconds with error
00:22:26.800 Verification LBA range: start 0x0 length 0x400
00:22:26.800    Nvme10n1           :       0.96     133.38       8.34      66.69       0.00  258502.04    5106.35  349525.33
00:22:26.800 [2024-10-08T16:38:20.857Z] ===================================================================================================================
00:22:26.800 [2024-10-08T16:38:20.857Z] Total              :               1314.65      82.17     653.26       0.00  292766.39    4805.97  349525.33
00:22:26.800 [2024-10-08 18:38:20.739505] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:26.800 [2024-10-08 18:38:20.739541] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:26.800 [2024-10-08 18:38:20.739987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.740004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f81030 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.740013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81030 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.740400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.740410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77b00 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.740417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f77b00 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.740624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.740633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f762c0 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.740641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f762c0 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.740954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.740964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7f340 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.740972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f340 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.743444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.743460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23aab10 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.743467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aab10 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.743783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.743793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e97610 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.743800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e97610 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.744018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.744028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23abb50 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.744035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abb50 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.744349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.800 [2024-10-08 18:38:20.744359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d2930 with addr=10.0.0.2, port=4420
00:22:26.800 [2024-10-08 18:38:20.744371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2930 is same with the state(6) to be set
00:22:26.800 [2024-10-08 18:38:20.744385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f81030 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77b00 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f762c0 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7f340 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744441] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:26.800 [2024-10-08 18:38:20.744452] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:26.800 [2024-10-08 18:38:20.744468] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:26.800 [2024-10-08 18:38:20.744478] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:26.800 [2024-10-08 18:38:20.744490] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:26.800 [2024-10-08 18:38:20.744501] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
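Two quick decodes of what the log above is saying. The connect() failures report errno 111, which on Linux is ECONNREFUSED, expected here because the target was just stopped; and the Latency table is arithmetically self-consistent, since every job ran 64 KiB (65536-byte) IOs, so MiB/s should equal IOPS/16 in each row (one-liners assuming python3 and bc are installed):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # -> ECONNREFUSED - Connection refused
  echo 'scale=2; 131.09 * 65536 / 1048576' | bc   # Nvme1n1 row: 8.19 MiB/s, matches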
00:22:26.800 [2024-10-08 18:38:20.744573] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:26.800 [2024-10-08 18:38:20.744584] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:26.800 [2024-10-08 18:38:20.744620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aab10 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e97610 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23abb50 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d2930 (9): Bad file descriptor
00:22:26.800 [2024-10-08 18:38:20.744657] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:26.800 [2024-10-08 18:38:20.744664] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:26.800 [2024-10-08 18:38:20.744672] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:26.800 [2024-10-08 18:38:20.744683] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.744689] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.744696] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:26.801 [2024-10-08 18:38:20.744706] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.744713] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.744719] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:26.801 [2024-10-08 18:38:20.744731] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.744737] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.744744] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:26.801 [2024-10-08 18:38:20.744826] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.744838] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.744844] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.744850] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.745204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.801 [2024-10-08 18:38:20.745216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23abd30 with addr=10.0.0.2, port=4420
00:22:26.801 [2024-10-08 18:38:20.745224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abd30 is same with the state(6) to be set
00:22:26.801 [2024-10-08 18:38:20.745531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.801 [2024-10-08 18:38:20.745541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dee10 with addr=10.0.0.2, port=4420
00:22:26.801 [2024-10-08 18:38:20.745548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dee10 is same with the state(6) to be set
00:22:26.801 [2024-10-08 18:38:20.745556] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.745562] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.745569] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:26.801 [2024-10-08 18:38:20.745580] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.745586] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.745595] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:26.801 [2024-10-08 18:38:20.745605] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.745611] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.745618] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:26.801 [2024-10-08 18:38:20.745628] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:22:26.801 [2024-10-08 18:38:20.745634] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:22:26.801 [2024-10-08 18:38:20.745641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:22:26.801 [2024-10-08 18:38:20.745670] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.745677] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.745684] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.801 [2024-10-08 18:38:20.745690] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
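The es=255 -> es=127 -> es=1 hops in the teardown trace just below come from the NOT helper in autotest_common.sh: shutdown.sh line 138 runs "NOT wait 1294074" to assert that waiting on the killed bdevperf pid fails. NOT executes a command that is expected to fail and inverts its exit status. A condensed sketch of that logic, reconstructed from the trace rather than copied from the SPDK source:

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=127   # collapse signal-style codes (here: 255 -> 127)
    case "$es" in
      0) ;;                    # command unexpectedly succeeded; es stays 0
      *) es=1 ;;               # any failure normalizes to 1
    esac
    (( !es == 0 ))             # invert: NOT succeeds only when "$@" failed
  }
  NOT false && echo 'false failed, so NOT succeeds'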
00:22:26.801 [2024-10-08 18:38:20.745698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23abd30 (9): Bad file descriptor 00:22:26.801 [2024-10-08 18:38:20.745707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dee10 (9): Bad file descriptor 00:22:26.801 [2024-10-08 18:38:20.745735] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:26.801 [2024-10-08 18:38:20.745742] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:26.801 [2024-10-08 18:38:20.745749] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:26.801 [2024-10-08 18:38:20.745761] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:26.801 [2024-10-08 18:38:20.745768] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:26.801 [2024-10-08 18:38:20.745775] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:26.801 [2024-10-08 18:38:20.745803] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.801 [2024-10-08 18:38:20.745811] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:27.062 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1294074 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1294074 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1294074 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.006 18:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.006 rmmod nvme_tcp 00:22:28.006 rmmod nvme_fabrics 00:22:28.006 rmmod nvme_keyring 00:22:28.006 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1293841 ']' 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1293841 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1293841 ']' 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1293841 00:22:28.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1293841) - No such process 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1293841 is not found' 00:22:28.006 Process with pid 1293841 is not found 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.006 18:38:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.557 00:22:30.557 real 0m7.784s 00:22:30.557 user 0m18.994s 00:22:30.557 sys 0m1.203s 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.557 ************************************ 00:22:30.557 END TEST nvmf_shutdown_tc3 00:22:30.557 ************************************ 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:30.557 ************************************ 00:22:30.557 START TEST nvmf_shutdown_tc4 00:22:30.557 ************************************ 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.557 18:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.557 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
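The e810/x722/mlx arrays being filled above key the supported NICs by PCI vendor:device ID; 0x8086:0x159b is an Intel E810-family (ice driver) part, which is why this job runs with SPDK_TEST_NVMF_NICS=e810. The same lookup can be reproduced by hand with standard lspci flags; the cvl_* (Columbiaville) interface names in the next lines come from this CI host's udev renaming, not from lspci:

  lspci -Dnn -d 8086:159b                      # lists the two E810 ports found below
  ls /sys/bus/pci/devices/0000:31:00.0/net/    # -> cvl_0_0 (kernel netdev for port 0)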
00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:30.558 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:30.558 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
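Once both ports are matched, nvmf_tcp_init (traced over the following lines) builds the test topology: the target port cvl_0_0 moves into a fresh cvl_0_0_ns_spdk network namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits the NVMe/TCP port, and a ping in each direction proves reachability. Condensed into a standalone sketch (same commands, names, and addresses as the trace; run as root):

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target port out of root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

In the trace the iptables rule is added with an '-m comment --comment SPDK_NVMF:...' tag, which is what lets the iptr cleanup seen at the end of tc3 strip SPDK's rules later by filtering iptables-save output for SPDK_NVMF.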
00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:30.558 Found net devices under 0000:31:00.0: cvl_0_0 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:30.558 Found net devices under 0000:31:00.1: cvl_0_1 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:30.558 18:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.558 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.559 18:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:22:30.559 00:22:30.559 --- 10.0.0.2 ping statistics --- 00:22:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.559 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:22:30.559 00:22:30.559 --- 10.0.0.1 ping statistics --- 00:22:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.559 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1295359 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1295359 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1295359 ']' 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.559 18:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.559 18:38:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.820 [2024-10-08 18:38:24.636674] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:22:30.820 [2024-10-08 18:38:24.636742] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.820 [2024-10-08 18:38:24.723848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.820 [2024-10-08 18:38:24.784901] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.820 [2024-10-08 18:38:24.784933] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.820 [2024-10-08 18:38:24.784939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.820 [2024-10-08 18:38:24.784944] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.820 [2024-10-08 18:38:24.784948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
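Condensing the nvmf_tcp_init sequence traced above: one E810 port (cvl_0_0) is moved into a private network namespace to host the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the link before any NVMe traffic. The same steps as a plain script (run as root; the interface names are the ports discovered earlier):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespaced target -> root ns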
00:22:30.820 [2024-10-08 18:38:24.786416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.820 [2024-10-08 18:38:24.786573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.820 [2024-10-08 18:38:24.786721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.820 [2024-10-08 18:38:24.786724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.391 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.392 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:31.392 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:31.392 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.392 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:31.654 [2024-10-08 18:38:25.476414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.654 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:31.654 Malloc1 00:22:31.654 [2024-10-08 18:38:25.575135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.654 Malloc2 00:22:31.654 Malloc3 00:22:31.654 Malloc4 00:22:31.654 Malloc5 00:22:31.916 Malloc6 00:22:31.916 Malloc7 00:22:31.916 Malloc8 00:22:31.916 Malloc9 00:22:31.916 Malloc10 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1295746 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:31.916 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:32.176 [2024-10-08 18:38:26.044034] 
subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1295359 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1295359 ']' 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1295359 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.470 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1295359 00:22:37.470 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:37.470 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:37.470 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1295359' 00:22:37.470 killing process with pid 1295359 00:22:37.470 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1295359 00:22:37.470 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1295359 00:22:37.470 [2024-10-08 18:38:31.050231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ed00 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ed00 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ed00 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ed00 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ed00 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ed00 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1d0 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1d0 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1d0 is same with the state(6) to be set 
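Recapping how tc4 reached this point: the target was started inside the namespace (nvmf_tgt -m 0x1E, pid 1295359), a TCP transport was created with nvmf_create_transport -t tcp -o -u 8192, ten subsystems were built over the Malloc1..Malloc10 bdevs with a listener on 10.0.0.2:4420, spdk_nvme_perf was launched against it, and killprocess then tears the target down while the workload is still in flight. A sketch of the setup half using the stock rpc.py client; the bdev sizes and cnode-style NQNs are assumptions, since this excerpt only shows the bdev names:

rpc=./scripts/rpc.py              # path relative to an SPDK checkout (assumed)
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in {1..10}; do
  $rpc bdev_malloc_create -b Malloc$i 64 512                      # size assumed
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done
# Initiator side (root namespace), the exact perf invocation from the trace:
./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!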
00:22:37.470 [2024-10-08 18:38:31.050672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1d0 is same with the state(6) to be set 00:22:37.470 [2024-10-08 18:38:31.050677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1d0 is same with the state(6) to be set 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 [2024-10-08 18:38:31.053173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 starting I/O failed: -6 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.470 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 
Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 18:38:31.054032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write 
completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 [2024-10-08 18:38:31.054887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 18:38:31.054903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 [2024-10-08 18:38:31.054908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 [2024-10-08 18:38:31.054913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 18:38:31.054919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 [2024-10-08 18:38:31.054925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 
18:38:31.054930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 [2024-10-08 18:38:31.054936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 [2024-10-08 18:38:31.054941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1380 is same with the state(6) to be set 00:22:37.471 [2024-10-08 18:38:31.054948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 18:38:31.055220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1850 is same with the state(6) to be set 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 [2024-10-08 18:38:31.055235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1850 is same with the state(6) to be set 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 18:38:31.055240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1850 is same with the state(6) to be set 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 [2024-10-08 18:38:31.055245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1850 is same with the state(6) to be set 00:22:37.471 [2024-10-08 18:38:31.055251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1850 is same with the state(6) to be set 00:22:37.471 starting I/O failed: -6 00:22:37.471 [2024-10-08 18:38:31.055256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1850 is same with the state(6) to be set 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write 
completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.471 Write completed with error (sct=0, sc=8) 00:22:37.471 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 [2024-10-08 18:38:31.055444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1d20 is same with the state(6) to be set 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 [2024-10-08 18:38:31.055465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1d20 is same with the state(6) to be set 00:22:37.472 starting I/O failed: -6 00:22:37.472 [2024-10-08 18:38:31.055471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1d20 is same with the state(6) to be set 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 [2024-10-08 18:38:31.055796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0eb0 is same with Write completed with error (sct=0, sc=8) 00:22:37.472 the state(6) to be set 00:22:37.472 starting I/O failed: -6 00:22:37.472 [2024-10-08 18:38:31.055821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0eb0 is same with the state(6) to be set 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 [2024-10-08 18:38:31.055827] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0eb0 is same with the state(6) to be set 00:22:37.472 starting I/O failed: -6 00:22:37.472 [2024-10-08 18:38:31.055832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0eb0 is same with the state(6) to be set 00:22:37.472 [2024-10-08 18:38:31.055838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0eb0 is same with the state(6) to be set 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 [2024-10-08 18:38:31.055843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0eb0 is same with the state(6) to be set 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 [2024-10-08 18:38:31.056361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.472 NVMe io qpair process completion error 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 
00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 [2024-10-08 18:38:31.057317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O 
failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 [2024-10-08 18:38:31.058240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.472 starting I/O failed: -6 00:22:37.472 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed 
with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 [2024-10-08 18:38:31.059143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 starting I/O failed: -6 00:22:37.473 Write completed with error (sct=0, sc=8) 00:22:37.473 
starting I/O failed: -6
00:22:37.473 Write completed with error (sct=0, sc=8)
[... repeated "starting I/O failed: -6" / "Write completed with error (sct=0, sc=8)" records elided ...]
00:22:37.473 [2024-10-08 18:38:31.060551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.473 NVMe io qpair process completion error
[... repeated write-failure records elided ...]
00:22:37.474 [2024-10-08 18:38:31.061803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure records elided ...]
00:22:37.474 [2024-10-08 18:38:31.062646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure records elided ...]
00:22:37.474 [2024-10-08 18:38:31.063584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure records elided ...]
00:22:37.475 [2024-10-08 18:38:31.065742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.475 NVMe io qpair process completion error
[... repeated write-failure records elided ...]
00:22:37.475 [2024-10-08 18:38:31.066799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure records elided ...]
00:22:37.475 [2024-10-08 18:38:31.067691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure records elided ...]
00:22:37.476 [2024-10-08 18:38:31.068669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure records elided ...]
00:22:37.476 [2024-10-08 18:38:31.071499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.476 NVMe io qpair process completion error
[... repeated write-failure records elided ...]
00:22:37.476 [2024-10-08 18:38:31.073005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure records elided ...]
00:22:37.477 [2024-10-08 18:38:31.073839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure records elided ...]
00:22:37.477 [2024-10-08 18:38:31.074782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure records elided ...]
00:22:37.478 [2024-10-08 18:38:31.076448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.478 NVMe io qpair process completion error
[... repeated write-failure records elided ...]
00:22:37.478 [2024-10-08 18:38:31.077433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure records elided ...]
00:22:37.478 [2024-10-08 18:38:31.078255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure records elided ...]
00:22:37.479 [2024-10-08 18:38:31.079198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure records elided ...]
00:22:37.479 [2024-10-08 18:38:31.081845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.479 NVMe io qpair process completion error
[... repeated write-failure records elided ...]
00:22:37.479 [2024-10-08 18:38:31.083142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure records elided ...]
00:22:37.480 [2024-10-08 18:38:31.084066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure records elided; log continues ...]
00:22:37.480 Write
completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 [2024-10-08 18:38:31.084983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write 
completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.480 Write completed with error (sct=0, sc=8) 00:22:37.480 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 [2024-10-08 18:38:31.086391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.481 NVMe io qpair process completion error 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write 
completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 [2024-10-08 18:38:31.087577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write 
completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 [2024-10-08 18:38:31.088414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O 
failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 [2024-10-08 18:38:31.089357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.481 starting I/O failed: -6 00:22:37.481 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O 
failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O 
failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 [2024-10-08 18:38:31.091910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.482 NVMe io qpair process completion error 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write 
completed with error (sct=0, sc=8) 00:22:37.482 [2024-10-08 18:38:31.093110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 starting I/O failed: -6 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.482 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 [2024-10-08 18:38:31.094010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: 
CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with 
error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 [2024-10-08 18:38:31.094903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with 
error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 [2024-10-08 18:38:31.096523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.483 NVMe io qpair process completion error 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 Write completed with error (sct=0, sc=8) 00:22:37.483 starting I/O failed: -6 00:22:37.483 Write completed with error (sct=0, sc=8) 
00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 [2024-10-08 18:38:31.097572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write 
completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 [2024-10-08 18:38:31.098394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error 
(sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 [2024-10-08 18:38:31.099344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.484 starting I/O failed: -6 00:22:37.484 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 
00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 00:22:37.485 Write completed with error (sct=0, sc=8) 00:22:37.485 starting I/O failed: -6 
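[Editor's note: the interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records are emitted by the test application's completion callback and its submit path as each TCP qpair is torn down; the "CQ transport error -6" lines come from the driver itself. Below is a minimal sketch of that submit/poll pattern, under stated assumptions — write_complete and submit_and_poll are hypothetical names, not the actual SPDK test source.]

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical completion callback: logs the status code type (sct) and
 * status code (sc), matching the "Write completed with error" records above. */
static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Hypothetical submit/poll step. Once the target side of the connection is
 * gone, spdk_nvme_ns_cmd_write() fails with -ENXIO (-6, "No such device or
 * address"), producing the "starting I/O failed: -6" records, and
 * spdk_nvme_qpair_process_completions() returns the same transport error
 * (the driver prints the "CQ transport error -6" line). */
static void
submit_and_poll(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_complete, NULL, 0);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}

	/* 0 == process all available completions. */
	spdk_nvme_qpair_process_completions(qpair, 0);
}
```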
00:22:37.485 Write completed with error (sct=0, sc=8)
00:22:37.485 starting I/O failed: -6
[... remaining failed-write pairs elided ...]
00:22:37.485 [2024-10-08 18:38:31.101263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:37.485 NVMe io qpair process completion error
00:22:37.485 Initializing NVMe Controllers
00:22:37.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:37.485 Controller IO queue size 128, less than required.
00:22:37.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the same "Attached to ..." / "Controller IO queue size 128, less than required." / "Consider using lower queue depth ..." triplet repeats for cnode4, cnode8, cnode6, cnode2, cnode1, cnode5, cnode3, cnode9 and cnode7 ...]
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:37.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:37.485 Initialization complete. Launching workers.
00:22:37.485 ========================================================
00:22:37.485                                                                    Latency(us)
00:22:37.485 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1853.20    79.63   69089.63     682.54  122424.93
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1859.78    79.91   68861.08     720.21  124129.32
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1889.05    81.17   67815.31     639.68  123625.07
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1892.65    81.32   67714.07     690.35  120555.11
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1897.11    81.52   67599.13     643.49  124765.53
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1904.32    81.83   67362.07     679.37  123748.36
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1881.41    80.84   68219.80     922.94  129151.50
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1886.29    81.05   68063.41     694.42  130855.34
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1859.78    79.91   69069.15     851.00  133718.57
00:22:37.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1872.08    80.44   68635.80     698.64  120041.47
00:22:37.485 ========================================================
00:22:37.485 Total                                                                    : 18795.66   807.63   68237.67     639.68  133718.57
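Note: the table is easy to sanity-check, since MiB/s should equal IOPS x I/O size / 2^20. The spdk_nvme_perf command line is not visible in this excerpt, but the ratio in every row implies an I/O size of roughly 45056 bytes (44 KiB) -- an inference, not something this log states. Taking the cnode10 row as the example, assuming that size:

  awk 'BEGIN { iops = 1853.20; io = 45056; printf "%.2f MiB/s\n", iops * io / 1048576 }'   # prints 79.63, matching the cnode10 row

The Total row reconciles the same way: 18795.66 x 45056 / 1048576 is approximately 807.63 MiB/s.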
00:22:37.485
00:22:37.485 [2024-10-08 18:38:31.106357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1121760 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111f8e0 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120810 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11201b0 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111f280 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1121430 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111ffd0 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111f5b0 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120b40 is same with the state(6) to be set
00:22:37.485 [2024-10-08 18:38:31.106632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11204e0 is same with the state(6) to be set
00:22:37.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:37.486 18:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1295746
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1295746
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1295746
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:38.428 18:38:32
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1295359 ']'
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1295359
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1295359 ']'
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1295359
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1295359) - No such process
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1295359 is not found'
Process with pid 1295359 is not found
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:38.428 18:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:40.973 18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:40.973
00:22:40.973 real 0m10.290s
00:22:40.973 user 0m27.989s
00:22:40.973 sys 0m3.854s
18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:40.973 ************************************
00:22:40.973 END TEST nvmf_shutdown_tc4
00:22:40.973 ************************************
18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:40.973
00:22:40.973 real 0m43.856s
00:22:40.973 user 1m45.709s
00:22:40.973 sys 0m13.957s
18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
18:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:40.973 ************************************
00:22:40.973 END TEST nvmf_shutdown
00:22:40.973 ************************************
18:38:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:22:40.973
00:22:40.973 real 12m50.841s
00:22:40.973 user 27m0.244s
00:22:40.973 sys 3m50.711s
18:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
18:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:40.973 ************************************
00:22:40.973 END TEST nvmf_target_extra
00:22:40.973 ************************************
18:38:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
18:38:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
18:38:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
18:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:40.973 ************************************
00:22:40.973 START TEST nvmf_host
00:22:40.973 ************************************
18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:22:40.973 * Looking for test storage...
00:22:40.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:40.973 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:40.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.974 --rc genhtml_branch_coverage=1 00:22:40.974 --rc genhtml_function_coverage=1 00:22:40.974 --rc genhtml_legend=1 00:22:40.974 --rc geninfo_all_blocks=1 00:22:40.974 --rc geninfo_unexecuted_blocks=1 00:22:40.974 00:22:40.974 ' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:40.974 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.974 --rc genhtml_branch_coverage=1 00:22:40.974 --rc genhtml_function_coverage=1 00:22:40.974 --rc genhtml_legend=1 00:22:40.974 --rc geninfo_all_blocks=1 00:22:40.974 --rc geninfo_unexecuted_blocks=1 00:22:40.974 00:22:40.974 ' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:40.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.974 --rc genhtml_branch_coverage=1 00:22:40.974 --rc genhtml_function_coverage=1 00:22:40.974 --rc genhtml_legend=1 00:22:40.974 --rc geninfo_all_blocks=1 00:22:40.974 --rc geninfo_unexecuted_blocks=1 00:22:40.974 00:22:40.974 ' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:40.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.974 --rc genhtml_branch_coverage=1 00:22:40.974 --rc genhtml_function_coverage=1 00:22:40.974 --rc genhtml_legend=1 00:22:40.974 --rc geninfo_all_blocks=1 00:22:40.974 --rc geninfo_unexecuted_blocks=1 00:22:40.974 00:22:40.974 ' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:40.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.974 ************************************ 00:22:40.974 START TEST nvmf_multicontroller 00:22:40.974 ************************************ 00:22:40.974 18:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:40.974 * Looking for test storage... 00:22:40.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:40.974 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:40.974 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:22:40.974 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:41.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.236 --rc genhtml_branch_coverage=1 00:22:41.236 --rc genhtml_function_coverage=1 00:22:41.236 --rc genhtml_legend=1 00:22:41.236 --rc geninfo_all_blocks=1 00:22:41.236 --rc geninfo_unexecuted_blocks=1 00:22:41.236 00:22:41.236 ' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:41.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.236 --rc genhtml_branch_coverage=1 00:22:41.236 --rc genhtml_function_coverage=1 00:22:41.236 --rc genhtml_legend=1 00:22:41.236 --rc geninfo_all_blocks=1 00:22:41.236 --rc geninfo_unexecuted_blocks=1 00:22:41.236 00:22:41.236 ' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:41.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.236 --rc genhtml_branch_coverage=1 00:22:41.236 --rc genhtml_function_coverage=1 00:22:41.236 --rc genhtml_legend=1 00:22:41.236 --rc geninfo_all_blocks=1 00:22:41.236 --rc geninfo_unexecuted_blocks=1 00:22:41.236 00:22:41.236 ' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:41.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.236 --rc genhtml_branch_coverage=1 00:22:41.236 --rc genhtml_function_coverage=1 00:22:41.236 --rc genhtml_legend=1 00:22:41.236 --rc geninfo_all_blocks=1 00:22:41.236 --rc geninfo_unexecuted_blocks=1 00:22:41.236 00:22:41.236 ' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:41.236 18:38:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.236 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:41.237 18:38:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.237 18:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.379 
18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.379 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:49.380 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:49.380 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.380 18:38:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:49.380 Found net devices under 0000:31:00.0: cvl_0_0 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:49.380 Found net devices under 0000:31:00.1: cvl_0_1 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
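The discovery pass above finds the two E810 ports (0000:31:00.0 and 0000:31:00.1) and resolves each to its kernel netdev (cvl_0_0, cvl_0_1) by globbing sysfs, exactly as the logged pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) line does. A standalone sketch of the same probe (BDFs taken from this run; substitute your own):

  for pci in 0000:31:00.0 0000:31:00.1; do            # PCI functions whitelisted by the test
    for path in /sys/bus/pci/devices/$pci/net/*; do   # each entry is a netdev bound to that function
      [ -e "$path" ] || continue                      # skip functions with no network driver bound
      echo "Found net devices under $pci: ${path##*/}"
    done
  done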
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
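Stripped of the xtrace prefixes, nvmf_tcp_init above pins one physical port (cvl_0_0) inside a private network namespace as the target side, leaves the other (cvl_0_1) in the root namespace as the initiator side, and opens TCP/4420, so target (10.0.0.2) and initiator (10.0.0.1) traverse a real network path on a single host; the two pings that follow verify both directions. A minimal reproduction on a box without spare NICs could use a veth pair in place of the two physical ports (all names here are illustrative, not from the test suite):

  ns=spdk_tgt_ns                                        # hypothetical namespace name
  ip netns add $ns
  ip link add veth_ini type veth peer name veth_tgt     # veth pair standing in for cvl_0_1/cvl_0_0
  ip link set veth_tgt netns $ns                        # target-side port disappears into the namespace
  ip addr add 10.0.0.1/24 dev veth_ini && ip link set veth_ini up
  ip netns exec $ns sh -c 'ip addr add 10.0.0.2/24 dev veth_tgt; ip link set veth_tgt up; ip link set lo up'
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec $ns ping -c 1 10.0.0.1      # both directions should answer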
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:49.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:49.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms
00:22:49.380
00:22:49.380 --- 10.0.0.2 ping statistics ---
00:22:49.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.380 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:49.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:49.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:22:49.380
00:22:49.380 --- 10.0.0.1 ping statistics ---
00:22:49.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.380 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1301304
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1301304
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1301304 ']'
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:49.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:49.380 18:38:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:49.380 [2024-10-08 18:38:42.916462] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
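With nvmf_tgt launching inside the namespace, the multicontroller test configures it entirely over JSON-RPC, as the rpc_cmd calls further below show: create the TCP transport, create Malloc0/Malloc1 bdevs, wrap each in a subsystem (cnode1/cnode2), and add listeners on ports 4420 and 4421. For orientation, the same sequence driven by hand would look roughly like this -- a sketch assuming the in-tree scripts/rpc.py client and its default /var/tmp/spdk.sock socket, with flag meanings paraphrased rather than taken from this log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport; -o/-u are TCP tuning flags used by the test
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421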
00:22:49.380 [2024-10-08 18:38:42.916527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.380 [2024-10-08 18:38:43.006048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:49.381 [2024-10-08 18:38:43.099608] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.381 [2024-10-08 18:38:43.099662] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.381 [2024-10-08 18:38:43.099671] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.381 [2024-10-08 18:38:43.099678] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.381 [2024-10-08 18:38:43.099684] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.381 [2024-10-08 18:38:43.101168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.381 [2024-10-08 18:38:43.101327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.381 [2024-10-08 18:38:43.101328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 [2024-10-08 18:38:43.803879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 Malloc0 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 [2024-10-08 18:38:43.874504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 [2024-10-08 18:38:43.886434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 Malloc1 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.953 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1301568 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1301568 /var/tmp/bdevperf.sock 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1301568 ']' 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
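Note on the step above: bdevperf is launched with -z, which keeps the app idle until RPC clients drive it over the private socket given by -r, so the test can attach controllers before any I/O starts (the actual run is later kicked off by bdevperf.py perform_tests). A minimal sketch of the same pattern outside the harness, assuming a built SPDK tree and a target already listening on 10.0.0.2:4420 (paths are illustrative):

  # Start bdevperf idle (-z) on its own RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &

  # Once the socket is up, attach the remote subsystem as bdev NVMe0
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1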
00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.954 18:38:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.897 18:38:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.897 18:38:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:50.897 18:38:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:50.897 18:38:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.897 18:38:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.158 NVMe0n1 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.158 1 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.158 request: 00:22:51.158 { 00:22:51.158 "name": "NVMe0", 00:22:51.158 "trtype": "tcp", 00:22:51.158 "traddr": "10.0.0.2", 00:22:51.158 "adrfam": "ipv4", 00:22:51.158 "trsvcid": "4420", 00:22:51.158 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:51.158 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:51.158 "hostaddr": "10.0.0.1", 00:22:51.158 "prchk_reftag": false, 00:22:51.158 "prchk_guard": false, 00:22:51.158 "hdgst": false, 00:22:51.158 "ddgst": false, 00:22:51.158 "allow_unrecognized_csi": false, 00:22:51.158 "method": "bdev_nvme_attach_controller", 00:22:51.158 "req_id": 1 00:22:51.158 } 00:22:51.158 Got JSON-RPC error response 00:22:51.158 response: 00:22:51.158 { 00:22:51.158 "code": -114, 00:22:51.158 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:51.158 } 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.158 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.159 request: 00:22:51.159 { 00:22:51.159 "name": "NVMe0", 00:22:51.159 "trtype": "tcp", 00:22:51.159 "traddr": "10.0.0.2", 00:22:51.159 "adrfam": "ipv4", 00:22:51.159 "trsvcid": "4420", 00:22:51.159 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.159 "hostaddr": "10.0.0.1", 00:22:51.159 "prchk_reftag": false, 00:22:51.159 "prchk_guard": false, 00:22:51.159 "hdgst": false, 00:22:51.159 "ddgst": false, 00:22:51.159 "allow_unrecognized_csi": false, 00:22:51.159 "method": "bdev_nvme_attach_controller", 00:22:51.159 "req_id": 1 00:22:51.159 } 00:22:51.159 Got JSON-RPC error response 00:22:51.159 response: 00:22:51.159 { 00:22:51.159 "code": -114, 00:22:51.159 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:51.159 } 00:22:51.159 18:38:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.159 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.420 request: 00:22:51.420 { 00:22:51.420 "name": "NVMe0", 00:22:51.420 "trtype": "tcp", 00:22:51.420 "traddr": "10.0.0.2", 00:22:51.420 "adrfam": "ipv4", 00:22:51.420 "trsvcid": "4420", 00:22:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.420 "hostaddr": "10.0.0.1", 00:22:51.420 "prchk_reftag": false, 00:22:51.420 "prchk_guard": false, 00:22:51.420 "hdgst": false, 00:22:51.420 "ddgst": false, 00:22:51.420 "multipath": "disable", 00:22:51.420 "allow_unrecognized_csi": false, 00:22:51.420 "method": "bdev_nvme_attach_controller", 00:22:51.420 "req_id": 1 00:22:51.420 } 00:22:51.420 Got JSON-RPC error response 00:22:51.420 response: 00:22:51.420 { 00:22:51.420 "code": -114, 00:22:51.420 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:51.420 } 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.420 18:38:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.420 request: 00:22:51.420 { 00:22:51.420 "name": "NVMe0", 00:22:51.420 "trtype": "tcp", 00:22:51.420 "traddr": "10.0.0.2", 00:22:51.420 "adrfam": "ipv4", 00:22:51.420 "trsvcid": "4420", 00:22:51.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.420 "hostaddr": "10.0.0.1", 00:22:51.420 "prchk_reftag": false, 00:22:51.420 "prchk_guard": false, 00:22:51.420 "hdgst": false, 00:22:51.420 "ddgst": false, 00:22:51.420 "multipath": "failover", 00:22:51.420 "allow_unrecognized_csi": false, 00:22:51.420 "method": "bdev_nvme_attach_controller", 00:22:51.420 "req_id": 1 00:22:51.420 } 00:22:51.420 Got JSON-RPC error response 00:22:51.420 response: 00:22:51.420 { 00:22:51.420 "code": -114, 00:22:51.420 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:51.420 } 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.420 NVMe0n1 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
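The rejected attaches and the one that succeeds spell out the multipath contract being tested: reusing the bdev name NVMe0 is refused with -114 when the host NQN differs, when the subsystem NQN differs, when multipath is explicitly disabled, or when the requested network path is identical to the existing one; a second listener port (4421) on the same subsystem is accepted as an extra path. Condensed from the trace (socket path illustrative):

  # Accepted: same bdev name, same subnqn, genuinely new network path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Controller count check, as the trace does with grep -c NVMe
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers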
00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.420 00:22:51.420 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:51.681 18:38:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.621 { 00:22:52.621 "results": [ 00:22:52.621 { 00:22:52.621 "job": "NVMe0n1", 00:22:52.621 "core_mask": "0x1", 00:22:52.621 "workload": "write", 00:22:52.621 "status": "finished", 00:22:52.621 "queue_depth": 128, 00:22:52.621 "io_size": 4096, 00:22:52.621 "runtime": 1.005549, 00:22:52.621 "iops": 25830.665636383706, 00:22:52.621 "mibps": 100.90103764212385, 00:22:52.621 "io_failed": 0, 00:22:52.621 "io_timeout": 0, 00:22:52.621 "avg_latency_us": 4944.222400862401, 00:22:52.621 "min_latency_us": 2389.3333333333335, 00:22:52.621 "max_latency_us": 14636.373333333333 00:22:52.621 } 00:22:52.621 ], 00:22:52.621 "core_count": 1 00:22:52.621 } 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1301568 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1301568 ']' 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1301568 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.621 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301568 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301568' 00:22:52.881 killing process with pid 1301568 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1301568 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1301568 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.881 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:52.882 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:52.882 [2024-10-08 18:38:44.018523] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:22:52.882 [2024-10-08 18:38:44.018602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301568 ] 00:22:52.882 [2024-10-08 18:38:44.103418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.882 [2024-10-08 18:38:44.199642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.882 [2024-10-08 18:38:45.471956] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name e32278b7-c387-4745-8d84-4ba7a9d3eaa1 already exists 00:22:52.882 [2024-10-08 18:38:45.472009] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:e32278b7-c387-4745-8d84-4ba7a9d3eaa1 alias for bdev NVMe1n1 00:22:52.882 [2024-10-08 18:38:45.472019] bdev_nvme.c:4559:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:52.882 Running I/O for 1 seconds... 00:22:52.882 25781.00 IOPS, 100.71 MiB/s 00:22:52.882 Latency(us) 00:22:52.882 [2024-10-08T16:38:46.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.882 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:52.882 NVMe0n1 : 1.01 25830.67 100.90 0.00 0.00 4944.22 2389.33 14636.37 00:22:52.882 [2024-10-08T16:38:46.939Z] =================================================================================================================== 00:22:52.882 [2024-10-08T16:38:46.939Z] Total : 25830.67 100.90 0.00 0.00 4944.22 2389.33 14636.37 00:22:52.882 Received shutdown signal, test time was about 1.000000 seconds 00:22:52.882 00:22:52.882 Latency(us) 00:22:52.882 [2024-10-08T16:38:46.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.882 [2024-10-08T16:38:46.939Z] =================================================================================================================== 00:22:52.882 [2024-10-08T16:38:46.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.882 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.882 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.882 rmmod nvme_tcp 00:22:52.882 rmmod nvme_fabrics 00:22:52.882 rmmod nvme_keyring 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
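With the I/O pass finished and try.txt archived, teardown runs in the usual order: unload the kernel NVMe modules (the rmmod lines above), kill the target, then strip only the firewall rules this run added and drop the namespace. A sketch of that cleanup, keyed off the SPDK_NVMF comment the setup attached to its iptables rule (ip netns delete stands in for the harness's remove_spdk_ns helper):

  # Remove only rules tagged by this harness, leaving other firewall state alone
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Drop the target namespace and flush the initiator-side address
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1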
00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1301304 ']' 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1301304 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1301304 ']' 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1301304 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.143 18:38:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301304 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301304' 00:22:53.143 killing process with pid 1301304 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1301304 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1301304 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.143 18:38:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.686 00:22:55.686 real 0m14.334s 00:22:55.686 user 0m17.630s 00:22:55.686 sys 0m6.664s 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.686 ************************************ 00:22:55.686 END TEST nvmf_multicontroller 00:22:55.686 ************************************ 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.686 ************************************ 00:22:55.686 START TEST nvmf_aer 00:22:55.686 ************************************ 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:55.686 * Looking for test storage... 00:22:55.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.686 --rc genhtml_branch_coverage=1 00:22:55.686 --rc genhtml_function_coverage=1 00:22:55.686 --rc genhtml_legend=1 00:22:55.686 --rc geninfo_all_blocks=1 00:22:55.686 --rc geninfo_unexecuted_blocks=1 00:22:55.686 00:22:55.686 ' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.686 --rc genhtml_branch_coverage=1 00:22:55.686 --rc genhtml_function_coverage=1 00:22:55.686 --rc genhtml_legend=1 00:22:55.686 --rc geninfo_all_blocks=1 00:22:55.686 --rc geninfo_unexecuted_blocks=1 00:22:55.686 00:22:55.686 ' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.686 --rc genhtml_branch_coverage=1 00:22:55.686 --rc genhtml_function_coverage=1 00:22:55.686 --rc genhtml_legend=1 00:22:55.686 --rc geninfo_all_blocks=1 00:22:55.686 --rc geninfo_unexecuted_blocks=1 00:22:55.686 00:22:55.686 ' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:55.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.686 --rc genhtml_branch_coverage=1 00:22:55.686 --rc genhtml_function_coverage=1 00:22:55.686 --rc genhtml_legend=1 00:22:55.686 --rc geninfo_all_blocks=1 00:22:55.686 --rc geninfo_unexecuted_blocks=1 00:22:55.686 00:22:55.686 ' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.686 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.687 18:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:03.826 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:03.826 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:03.826 Found net devices under 0000:31:00.0: cvl_0_0 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:03.826 18:38:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:03.826 Found net devices under 0000:31:00.1: cvl_0_1 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.826 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:03.827 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:03.827 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.827 18:38:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.827 
18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:03.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:23:03.827 00:23:03.827 --- 10.0.0.2 ping statistics --- 00:23:03.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.827 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:23:03.827 00:23:03.827 --- 10.0.0.1 ping statistics --- 00:23:03.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.827 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1306422 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1306422 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1306422 ']' 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.827 18:38:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.827 [2024-10-08 18:38:57.302472] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
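For readers reproducing this stage by hand: the target-in-a-namespace plumbing that nvmf_tcp_init performs above reduces to the shell sketch below. This is a minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names this host discovered under its E810 ports; substitute your own NIC names.

# move one port into a fresh namespace for the target, keep the peer as initiator
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the comment tag lets cleanup strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1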
00:23:03.827 [2024-10-08 18:38:57.302538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.827 [2024-10-08 18:38:57.392429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.827 [2024-10-08 18:38:57.488492] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.827 [2024-10-08 18:38:57.488553] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.827 [2024-10-08 18:38:57.488561] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.827 [2024-10-08 18:38:57.488569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.827 [2024-10-08 18:38:57.488575] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.827 [2024-10-08 18:38:57.490658] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.827 [2024-10-08 18:38:57.490822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.827 [2024-10-08 18:38:57.491036] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.827 [2024-10-08 18:38:57.491036] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.088 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.088 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:04.088 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:04.088 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.088 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.349 [2024-10-08 18:38:58.184361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.349 Malloc0 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.349 [2024-10-08 18:38:58.250038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.349 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.350 [ 00:23:04.350 { 00:23:04.350 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:04.350 "subtype": "Discovery", 00:23:04.350 "listen_addresses": [], 00:23:04.350 "allow_any_host": true, 00:23:04.350 "hosts": [] 00:23:04.350 }, 00:23:04.350 { 00:23:04.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.350 "subtype": "NVMe", 00:23:04.350 "listen_addresses": [ 00:23:04.350 { 00:23:04.350 "trtype": "TCP", 00:23:04.350 "adrfam": "IPv4", 00:23:04.350 "traddr": "10.0.0.2", 00:23:04.350 "trsvcid": "4420" 00:23:04.350 } 00:23:04.350 ], 00:23:04.350 "allow_any_host": true, 00:23:04.350 "hosts": [], 00:23:04.350 "serial_number": "SPDK00000000000001", 00:23:04.350 "model_number": "SPDK bdev Controller", 00:23:04.350 "max_namespaces": 2, 00:23:04.350 "min_cntlid": 1, 00:23:04.350 "max_cntlid": 65519, 00:23:04.350 "namespaces": [ 00:23:04.350 { 00:23:04.350 "nsid": 1, 00:23:04.350 "bdev_name": "Malloc0", 00:23:04.350 "name": "Malloc0", 00:23:04.350 "nguid": "C2C1EB7003EC435688712F3E1A3E22A5", 00:23:04.350 "uuid": "c2c1eb70-03ec-4356-8871-2f3e1a3e22a5" 00:23:04.350 } 00:23:04.350 ] 00:23:04.350 } 00:23:04.350 ] 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1306674 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:04.350 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.610 Malloc1 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.610 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.610 Asynchronous Event Request test 00:23:04.610 Attaching to 10.0.0.2 00:23:04.610 Attached to 10.0.0.2 00:23:04.610 Registering asynchronous event callbacks... 00:23:04.610 Starting namespace attribute notice tests for all controllers... 00:23:04.610 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:04.610 aer_cb - Changed Namespace 00:23:04.610 Cleaning up... 
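The aer test case above is driven entirely over JSON-RPC; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py. A minimal sketch of the same sequence, assuming a running nvmf_tgt and the repository's scripts/rpc.py and test binaries on relative paths:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# attach the AER listener, then add a second namespace to raise the
# Changed Namespace notice it is waiting for
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2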
00:23:04.611 [ 00:23:04.611 { 00:23:04.611 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:04.611 "subtype": "Discovery", 00:23:04.611 "listen_addresses": [], 00:23:04.611 "allow_any_host": true, 00:23:04.611 "hosts": [] 00:23:04.611 }, 00:23:04.611 { 00:23:04.611 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.611 "subtype": "NVMe", 00:23:04.611 "listen_addresses": [ 00:23:04.611 { 00:23:04.611 "trtype": "TCP", 00:23:04.611 "adrfam": "IPv4", 00:23:04.611 "traddr": "10.0.0.2", 00:23:04.611 "trsvcid": "4420" 00:23:04.611 } 00:23:04.611 ], 00:23:04.611 "allow_any_host": true, 00:23:04.611 "hosts": [], 00:23:04.611 "serial_number": "SPDK00000000000001", 00:23:04.611 "model_number": "SPDK bdev Controller", 00:23:04.611 "max_namespaces": 2, 00:23:04.611 "min_cntlid": 1, 00:23:04.611 "max_cntlid": 65519, 00:23:04.611 "namespaces": [ 00:23:04.611 { 00:23:04.611 "nsid": 1, 00:23:04.611 "bdev_name": "Malloc0", 00:23:04.611 "name": "Malloc0", 00:23:04.611 "nguid": "C2C1EB7003EC435688712F3E1A3E22A5", 00:23:04.611 "uuid": "c2c1eb70-03ec-4356-8871-2f3e1a3e22a5" 00:23:04.611 }, 00:23:04.611 { 00:23:04.611 "nsid": 2, 00:23:04.611 "bdev_name": "Malloc1", 00:23:04.611 "name": "Malloc1", 00:23:04.611 "nguid": "A61EE7C0EBF748F187BFE7202BEAB567", 00:23:04.611 "uuid": "a61ee7c0-ebf7-48f1-87bf-e7202beab567" 00:23:04.611 } 00:23:04.611 ] 00:23:04.611 } 00:23:04.611 ] 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1306674 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.611 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.611 rmmod 
nvme_tcp 00:23:04.611 rmmod nvme_fabrics 00:23:04.870 rmmod nvme_keyring 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1306422 ']' 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1306422 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1306422 ']' 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1306422 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:04.870 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.871 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1306422 00:23:04.871 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.871 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.871 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1306422' 00:23:04.871 killing process with pid 1306422 00:23:04.871 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1306422 00:23:04.871 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1306422 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.131 18:38:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.042 18:39:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.042 00:23:07.042 real 0m11.724s 00:23:07.042 user 0m8.159s 00:23:07.042 sys 0m6.281s 00:23:07.042 18:39:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.042 18:39:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.042 ************************************ 00:23:07.042 END TEST nvmf_aer 00:23:07.042 ************************************ 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.303 ************************************ 00:23:07.303 START TEST nvmf_async_init 00:23:07.303 ************************************ 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:07.303 * Looking for test storage... 00:23:07.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.303 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:07.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.303 --rc genhtml_branch_coverage=1 00:23:07.303 --rc genhtml_function_coverage=1 00:23:07.303 --rc genhtml_legend=1 00:23:07.303 --rc geninfo_all_blocks=1 00:23:07.304 --rc geninfo_unexecuted_blocks=1 00:23:07.304 00:23:07.304 ' 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:07.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.304 --rc genhtml_branch_coverage=1 00:23:07.304 --rc genhtml_function_coverage=1 00:23:07.304 --rc genhtml_legend=1 00:23:07.304 --rc geninfo_all_blocks=1 00:23:07.304 --rc geninfo_unexecuted_blocks=1 00:23:07.304 00:23:07.304 ' 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:07.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.304 --rc genhtml_branch_coverage=1 00:23:07.304 --rc genhtml_function_coverage=1 00:23:07.304 --rc genhtml_legend=1 00:23:07.304 --rc geninfo_all_blocks=1 00:23:07.304 --rc geninfo_unexecuted_blocks=1 00:23:07.304 00:23:07.304 ' 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:07.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.304 --rc genhtml_branch_coverage=1 00:23:07.304 --rc genhtml_function_coverage=1 00:23:07.304 --rc genhtml_legend=1 00:23:07.304 --rc geninfo_all_blocks=1 00:23:07.304 --rc geninfo_unexecuted_blocks=1 00:23:07.304 00:23:07.304 ' 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.304 18:39:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.304 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:07.574 18:39:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5f65cbf410dc4576926e2e95f765da69 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.574 18:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:15.718 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:15.718 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:15.718 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:15.719 Found net devices under 0000:31:00.0: cvl_0_0 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:15.719 Found net devices under 0000:31:00.1: cvl_0_1 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.719 18:39:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.719 18:39:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:23:15.719 00:23:15.719 --- 10.0.0.2 ping statistics --- 00:23:15.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.719 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:23:15.719 00:23:15.719 --- 10.0.0.1 ping statistics --- 00:23:15.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.719 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1311390 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1311390 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1311390 ']' 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.719 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.719 [2024-10-08 18:39:09.150145] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
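nvmfappstart above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. One way to approximate that pattern outside the harness (a sketch only; the real helper also enforces a retry budget, and polling rpc_get_methods here stands in for its socket check):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll the UNIX-domain RPC socket rather than sleeping a fixed interval;
# the socket lives in the filesystem, so rpc.py works from outside the netns
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done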
00:23:15.719 [2024-10-08 18:39:09.150211] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.719 [2024-10-08 18:39:09.241175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.719 [2024-10-08 18:39:09.334982] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.719 [2024-10-08 18:39:09.335044] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.719 [2024-10-08 18:39:09.335053] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.719 [2024-10-08 18:39:09.335060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.719 [2024-10-08 18:39:09.335067] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.719 [2024-10-08 18:39:09.335899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.980 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.980 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:15.980 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:15.980 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.980 18:39:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.980 [2024-10-08 18:39:10.021909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.980 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.241 null0 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5f65cbf410dc4576926e2e95f765da69 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.241 [2024-10-08 18:39:10.082288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.241 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.501 nvme0n1 00:23:16.501 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.501 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:16.501 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.501 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.501 [ 00:23:16.501 { 00:23:16.501 "name": "nvme0n1", 00:23:16.501 "aliases": [ 00:23:16.501 "5f65cbf4-10dc-4576-926e-2e95f765da69" 00:23:16.501 ], 00:23:16.501 "product_name": "NVMe disk", 00:23:16.501 "block_size": 512, 00:23:16.501 "num_blocks": 2097152, 00:23:16.501 "uuid": "5f65cbf4-10dc-4576-926e-2e95f765da69", 00:23:16.501 "numa_id": 0, 00:23:16.501 "assigned_rate_limits": { 00:23:16.501 "rw_ios_per_sec": 0, 00:23:16.501 "rw_mbytes_per_sec": 0, 00:23:16.501 "r_mbytes_per_sec": 0, 00:23:16.501 "w_mbytes_per_sec": 0 00:23:16.501 }, 00:23:16.501 "claimed": false, 00:23:16.501 "zoned": false, 00:23:16.501 "supported_io_types": { 00:23:16.501 "read": true, 00:23:16.501 "write": true, 00:23:16.501 "unmap": false, 00:23:16.501 "flush": true, 00:23:16.501 "reset": true, 00:23:16.501 "nvme_admin": true, 00:23:16.501 "nvme_io": true, 00:23:16.501 "nvme_io_md": false, 00:23:16.501 "write_zeroes": true, 00:23:16.501 "zcopy": false, 00:23:16.501 "get_zone_info": false, 00:23:16.501 "zone_management": false, 00:23:16.501 "zone_append": false, 00:23:16.501 "compare": true, 00:23:16.501 "compare_and_write": true, 00:23:16.501 "abort": true, 00:23:16.501 "seek_hole": false, 00:23:16.501 "seek_data": false, 00:23:16.501 "copy": true, 00:23:16.501 "nvme_iov_md": false 00:23:16.501 }, 00:23:16.501 
"memory_domains": [ 00:23:16.501 { 00:23:16.501 "dma_device_id": "system", 00:23:16.501 "dma_device_type": 1 00:23:16.502 } 00:23:16.502 ], 00:23:16.502 "driver_specific": { 00:23:16.502 "nvme": [ 00:23:16.502 { 00:23:16.502 "trid": { 00:23:16.502 "trtype": "TCP", 00:23:16.502 "adrfam": "IPv4", 00:23:16.502 "traddr": "10.0.0.2", 00:23:16.502 "trsvcid": "4420", 00:23:16.502 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.502 }, 00:23:16.502 "ctrlr_data": { 00:23:16.502 "cntlid": 1, 00:23:16.502 "vendor_id": "0x8086", 00:23:16.502 "model_number": "SPDK bdev Controller", 00:23:16.502 "serial_number": "00000000000000000000", 00:23:16.502 "firmware_revision": "25.01", 00:23:16.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.502 "oacs": { 00:23:16.502 "security": 0, 00:23:16.502 "format": 0, 00:23:16.502 "firmware": 0, 00:23:16.502 "ns_manage": 0 00:23:16.502 }, 00:23:16.502 "multi_ctrlr": true, 00:23:16.502 "ana_reporting": false 00:23:16.502 }, 00:23:16.502 "vs": { 00:23:16.502 "nvme_version": "1.3" 00:23:16.502 }, 00:23:16.502 "ns_data": { 00:23:16.502 "id": 1, 00:23:16.502 "can_share": true 00:23:16.502 } 00:23:16.502 } 00:23:16.502 ], 00:23:16.502 "mp_policy": "active_passive" 00:23:16.502 } 00:23:16.502 } 00:23:16.502 ] 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.502 [2024-10-08 18:39:10.358743] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.502 [2024-10-08 18:39:10.358828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92bc10 (9): Bad file descriptor 00:23:16.502 [2024-10-08 18:39:10.491088] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.502 [ 00:23:16.502 { 00:23:16.502 "name": "nvme0n1", 00:23:16.502 "aliases": [ 00:23:16.502 "5f65cbf4-10dc-4576-926e-2e95f765da69" 00:23:16.502 ], 00:23:16.502 "product_name": "NVMe disk", 00:23:16.502 "block_size": 512, 00:23:16.502 "num_blocks": 2097152, 00:23:16.502 "uuid": "5f65cbf4-10dc-4576-926e-2e95f765da69", 00:23:16.502 "numa_id": 0, 00:23:16.502 "assigned_rate_limits": { 00:23:16.502 "rw_ios_per_sec": 0, 00:23:16.502 "rw_mbytes_per_sec": 0, 00:23:16.502 "r_mbytes_per_sec": 0, 00:23:16.502 "w_mbytes_per_sec": 0 00:23:16.502 }, 00:23:16.502 "claimed": false, 00:23:16.502 "zoned": false, 00:23:16.502 "supported_io_types": { 00:23:16.502 "read": true, 00:23:16.502 "write": true, 00:23:16.502 "unmap": false, 00:23:16.502 "flush": true, 00:23:16.502 "reset": true, 00:23:16.502 "nvme_admin": true, 00:23:16.502 "nvme_io": true, 00:23:16.502 "nvme_io_md": false, 00:23:16.502 "write_zeroes": true, 00:23:16.502 "zcopy": false, 00:23:16.502 "get_zone_info": false, 00:23:16.502 "zone_management": false, 00:23:16.502 "zone_append": false, 00:23:16.502 "compare": true, 00:23:16.502 "compare_and_write": true, 00:23:16.502 "abort": true, 00:23:16.502 "seek_hole": false, 00:23:16.502 "seek_data": false, 00:23:16.502 "copy": true, 00:23:16.502 "nvme_iov_md": false 00:23:16.502 }, 00:23:16.502 "memory_domains": [ 00:23:16.502 { 00:23:16.502 "dma_device_id": "system", 00:23:16.502 "dma_device_type": 1 00:23:16.502 } 00:23:16.502 ], 00:23:16.502 "driver_specific": { 00:23:16.502 "nvme": [ 00:23:16.502 { 00:23:16.502 "trid": { 00:23:16.502 "trtype": "TCP", 00:23:16.502 "adrfam": "IPv4", 00:23:16.502 "traddr": "10.0.0.2", 00:23:16.502 "trsvcid": "4420", 00:23:16.502 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.502 }, 00:23:16.502 "ctrlr_data": { 00:23:16.502 "cntlid": 2, 00:23:16.502 "vendor_id": "0x8086", 00:23:16.502 "model_number": "SPDK bdev Controller", 00:23:16.502 "serial_number": "00000000000000000000", 00:23:16.502 "firmware_revision": "25.01", 00:23:16.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.502 "oacs": { 00:23:16.502 "security": 0, 00:23:16.502 "format": 0, 00:23:16.502 "firmware": 0, 00:23:16.502 "ns_manage": 0 00:23:16.502 }, 00:23:16.502 "multi_ctrlr": true, 00:23:16.502 "ana_reporting": false 00:23:16.502 }, 00:23:16.502 "vs": { 00:23:16.502 "nvme_version": "1.3" 00:23:16.502 }, 00:23:16.502 "ns_data": { 00:23:16.502 "id": 1, 00:23:16.502 "can_share": true 00:23:16.502 } 00:23:16.502 } 00:23:16.502 ], 00:23:16.502 "mp_policy": "active_passive" 00:23:16.502 } 00:23:16.502 } 00:23:16.502 ] 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
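The two bdev_get_bdevs dumps bracket the reset: the first reports "cntlid": 1, the second "cntlid": 2, confirming the host dropped the old association and established a fresh controller on the same subsystem. A quick way to pull that field out of the RPC output (assuming jq is available; the JSON path matches the dumps above):

    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'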
00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RZZ7vKKodK 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RZZ7vKKodK 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.RZZ7vKKodK 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.502 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 [2024-10-08 18:39:10.579468] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.763 [2024-10-08 18:39:10.579637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 [2024-10-08 18:39:10.603545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.763 nvme0n1 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 [ 00:23:16.763 { 00:23:16.763 "name": "nvme0n1", 00:23:16.763 "aliases": [ 00:23:16.763 "5f65cbf4-10dc-4576-926e-2e95f765da69" 00:23:16.763 ], 00:23:16.763 "product_name": "NVMe disk", 00:23:16.763 "block_size": 512, 00:23:16.763 "num_blocks": 2097152, 00:23:16.763 "uuid": "5f65cbf4-10dc-4576-926e-2e95f765da69", 00:23:16.763 "numa_id": 0, 00:23:16.763 "assigned_rate_limits": { 00:23:16.763 "rw_ios_per_sec": 0, 00:23:16.763 "rw_mbytes_per_sec": 0, 00:23:16.763 "r_mbytes_per_sec": 0, 00:23:16.763 "w_mbytes_per_sec": 0 00:23:16.763 }, 00:23:16.763 "claimed": false, 00:23:16.763 "zoned": false, 00:23:16.763 "supported_io_types": { 00:23:16.763 "read": true, 00:23:16.763 "write": true, 00:23:16.763 "unmap": false, 00:23:16.763 "flush": true, 00:23:16.763 "reset": true, 00:23:16.763 "nvme_admin": true, 00:23:16.763 "nvme_io": true, 00:23:16.763 "nvme_io_md": false, 00:23:16.763 "write_zeroes": true, 00:23:16.763 "zcopy": false, 00:23:16.763 "get_zone_info": false, 00:23:16.763 "zone_management": false, 00:23:16.763 "zone_append": false, 00:23:16.763 "compare": true, 00:23:16.763 "compare_and_write": true, 00:23:16.763 "abort": true, 00:23:16.763 "seek_hole": false, 00:23:16.763 "seek_data": false, 00:23:16.763 "copy": true, 00:23:16.763 "nvme_iov_md": false 00:23:16.763 }, 00:23:16.763 "memory_domains": [ 00:23:16.763 { 00:23:16.763 "dma_device_id": "system", 00:23:16.763 "dma_device_type": 1 00:23:16.763 } 00:23:16.763 ], 00:23:16.763 "driver_specific": { 00:23:16.763 "nvme": [ 00:23:16.763 { 00:23:16.763 "trid": { 00:23:16.763 "trtype": "TCP", 00:23:16.763 "adrfam": "IPv4", 00:23:16.763 "traddr": "10.0.0.2", 00:23:16.763 "trsvcid": "4421", 00:23:16.763 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.763 }, 00:23:16.763 "ctrlr_data": { 00:23:16.763 "cntlid": 3, 00:23:16.763 "vendor_id": "0x8086", 00:23:16.763 "model_number": "SPDK bdev Controller", 00:23:16.763 "serial_number": "00000000000000000000", 00:23:16.763 "firmware_revision": "25.01", 00:23:16.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.763 "oacs": { 00:23:16.763 "security": 0, 00:23:16.763 "format": 0, 00:23:16.763 "firmware": 0, 00:23:16.763 "ns_manage": 0 00:23:16.763 }, 00:23:16.763 "multi_ctrlr": true, 00:23:16.763 "ana_reporting": false 00:23:16.763 }, 00:23:16.763 "vs": { 00:23:16.763 "nvme_version": "1.3" 00:23:16.763 }, 00:23:16.763 "ns_data": { 00:23:16.763 "id": 1, 00:23:16.763 "can_share": true 00:23:16.763 } 00:23:16.763 } 00:23:16.763 ], 00:23:16.763 "mp_policy": "active_passive" 00:23:16.763 } 00:23:16.763 } 00:23:16.763 ] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.RZZ7vKKodK 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
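The second half of the test exercises TLS: the PSK echoed into the mode-0600 temp file is loaded into the keyring, the subsystem is switched to explicit host authorization, a --secure-channel listener is added on port 4421, and the host attaches with the same key, which is why the dump above shows "trsvcid": "4421" and "cntlid": 3. A sketch of that sequence, substituting this run's mktemp result for the key path (any fixed 0600 file would do):

    KEY=/tmp/tmp.RZZ7vKKodK
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > $KEY
    chmod 0600 $KEY
    scripts/rpc.py keyring_file_add_key key0 $KEY
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listener and the attach path print the "TLS support is considered experimental" notice seen in the log.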
00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.763 rmmod nvme_tcp 00:23:16.763 rmmod nvme_fabrics 00:23:16.763 rmmod nvme_keyring 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1311390 ']' 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1311390 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1311390 ']' 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1311390 00:23:16.763 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:16.764 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.764 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1311390 00:23:17.023 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.023 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.023 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1311390' 00:23:17.023 killing process with pid 1311390 00:23:17.024 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1311390 00:23:17.024 18:39:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1311390 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.024 18:39:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.609 00:23:19.609 real 0m11.977s 00:23:19.609 user 0m4.277s 00:23:19.609 sys 0m6.255s 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.609 ************************************ 00:23:19.609 END TEST nvmf_async_init 00:23:19.609 ************************************ 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.609 ************************************ 00:23:19.609 START TEST dma 00:23:19.609 ************************************ 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:19.609 * Looking for test storage... 00:23:19.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.609 --rc genhtml_branch_coverage=1 00:23:19.609 --rc genhtml_function_coverage=1 00:23:19.609 --rc genhtml_legend=1 00:23:19.609 --rc geninfo_all_blocks=1 00:23:19.609 --rc geninfo_unexecuted_blocks=1 00:23:19.609 00:23:19.609 ' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.609 --rc genhtml_branch_coverage=1 00:23:19.609 --rc genhtml_function_coverage=1 00:23:19.609 --rc genhtml_legend=1 00:23:19.609 --rc geninfo_all_blocks=1 00:23:19.609 --rc geninfo_unexecuted_blocks=1 00:23:19.609 00:23:19.609 ' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.609 --rc genhtml_branch_coverage=1 00:23:19.609 --rc genhtml_function_coverage=1 00:23:19.609 --rc genhtml_legend=1 00:23:19.609 --rc geninfo_all_blocks=1 00:23:19.609 --rc geninfo_unexecuted_blocks=1 00:23:19.609 00:23:19.609 ' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.609 --rc genhtml_branch_coverage=1 00:23:19.609 --rc genhtml_function_coverage=1 00:23:19.609 --rc genhtml_legend=1 00:23:19.609 --rc geninfo_all_blocks=1 00:23:19.609 --rc geninfo_unexecuted_blocks=1 00:23:19.609 00:23:19.609 ' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.609 
18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:19.609 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:19.610 00:23:19.610 real 0m0.237s 00:23:19.610 user 0m0.137s 00:23:19.610 sys 0m0.115s 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 ************************************ 00:23:19.610 END TEST dma 00:23:19.610 ************************************ 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.610 ************************************ 00:23:19.610 START TEST nvmf_identify 00:23:19.610 
************************************ 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:19.610 * Looking for test storage... 00:23:19.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:23:19.610 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.903 --rc genhtml_branch_coverage=1 00:23:19.903 --rc genhtml_function_coverage=1 00:23:19.903 --rc genhtml_legend=1 00:23:19.903 --rc geninfo_all_blocks=1 00:23:19.903 --rc geninfo_unexecuted_blocks=1 00:23:19.903 00:23:19.903 ' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.903 --rc genhtml_branch_coverage=1 00:23:19.903 --rc genhtml_function_coverage=1 00:23:19.903 --rc genhtml_legend=1 00:23:19.903 --rc geninfo_all_blocks=1 00:23:19.903 --rc geninfo_unexecuted_blocks=1 00:23:19.903 00:23:19.903 ' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.903 --rc genhtml_branch_coverage=1 00:23:19.903 --rc genhtml_function_coverage=1 00:23:19.903 --rc genhtml_legend=1 00:23:19.903 --rc geninfo_all_blocks=1 00:23:19.903 --rc geninfo_unexecuted_blocks=1 00:23:19.903 00:23:19.903 ' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.903 --rc genhtml_branch_coverage=1 00:23:19.903 --rc genhtml_function_coverage=1 00:23:19.903 --rc genhtml_legend=1 00:23:19.903 --rc geninfo_all_blocks=1 00:23:19.903 --rc geninfo_unexecuted_blocks=1 00:23:19.903 00:23:19.903 ' 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.903 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.904 18:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:28.084 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:28.084 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:28.084 Found net devices under 0000:31:00.0: cvl_0_0 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:28.084 Found net devices under 0000:31:00.1: cvl_0_1 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.084 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:23:28.085 00:23:28.085 --- 10.0.0.2 ping statistics --- 00:23:28.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.085 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:23:28.085 00:23:28.085 --- 10.0.0.1 ping statistics --- 00:23:28.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.085 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1316426 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1316426 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1316426 ']' 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.085 18:39:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.085 [2024-10-08 18:39:21.509416] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:23:28.085 [2024-10-08 18:39:21.509481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.085 [2024-10-08 18:39:21.600673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.085 [2024-10-08 18:39:21.696738] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.085 [2024-10-08 18:39:21.696795] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.085 [2024-10-08 18:39:21.696804] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.085 [2024-10-08 18:39:21.696811] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.085 [2024-10-08 18:39:21.696817] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.085 [2024-10-08 18:39:21.699266] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.085 [2024-10-08 18:39:21.699427] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.085 [2024-10-08 18:39:21.699591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.085 [2024-10-08 18:39:21.699591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.346 [2024-10-08 18:39:22.347395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.346 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 Malloc0 00:23:28.608 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.608 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.608 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.608 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
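[note] With the target up, host/identify.sh provisions it over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace attach and the two listener registrations appear in the trace just below. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the same sequence can be written directly as the following sketch (flags copied verbatim from the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP tuning flags exactly as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                          # emits the JSON dump seen below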
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.609 [2024-10-08 18:39:22.457428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.609 [ 00:23:28.609 { 00:23:28.609 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:28.609 "subtype": "Discovery", 00:23:28.609 "listen_addresses": [ 00:23:28.609 { 00:23:28.609 "trtype": "TCP", 00:23:28.609 "adrfam": "IPv4", 00:23:28.609 "traddr": "10.0.0.2", 00:23:28.609 "trsvcid": "4420" 00:23:28.609 } 00:23:28.609 ], 00:23:28.609 "allow_any_host": true, 00:23:28.609 "hosts": [] 00:23:28.609 }, 00:23:28.609 { 00:23:28.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.609 "subtype": "NVMe", 00:23:28.609 "listen_addresses": [ 00:23:28.609 { 00:23:28.609 "trtype": "TCP", 00:23:28.609 "adrfam": "IPv4", 00:23:28.609 "traddr": "10.0.0.2", 00:23:28.609 "trsvcid": "4420" 00:23:28.609 } 00:23:28.609 ], 00:23:28.609 "allow_any_host": true, 00:23:28.609 "hosts": [], 00:23:28.609 "serial_number": "SPDK00000000000001", 00:23:28.609 "model_number": "SPDK bdev Controller", 00:23:28.609 "max_namespaces": 32, 00:23:28.609 "min_cntlid": 1, 00:23:28.609 "max_cntlid": 65519, 00:23:28.609 "namespaces": [ 00:23:28.609 { 00:23:28.609 "nsid": 1, 00:23:28.609 "bdev_name": "Malloc0", 00:23:28.609 "name": "Malloc0", 00:23:28.609 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:28.609 "eui64": "ABCDEF0123456789", 00:23:28.609 "uuid": "6a9cf2da-fb82-449a-a736-c835284020ff" 00:23:28.609 } 00:23:28.609 ] 00:23:28.609 } 00:23:28.609 ] 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.609 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:28.609 [2024-10-08 18:39:22.521183] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:23:28.609 [2024-10-08 18:39:22.521230] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316532 ] 00:23:28.609 [2024-10-08 18:39:22.560188] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:28.609 [2024-10-08 18:39:22.560252] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:28.609 [2024-10-08 18:39:22.560257] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:28.609 [2024-10-08 18:39:22.560277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:28.609 [2024-10-08 18:39:22.560288] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:28.609 [2024-10-08 18:39:22.561119] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:28.609 [2024-10-08 18:39:22.561166] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10f8620 0 00:23:28.609 [2024-10-08 18:39:22.574991] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:28.609 [2024-10-08 18:39:22.575012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:28.609 [2024-10-08 18:39:22.575017] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:28.609 [2024-10-08 18:39:22.575021] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:28.609 [2024-10-08 18:39:22.575059] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.575066] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.575070] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.609 [2024-10-08 18:39:22.575086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:28.609 [2024-10-08 18:39:22.575110] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.609 [2024-10-08 18:39:22.582993] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.609 [2024-10-08 18:39:22.583004] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.609 [2024-10-08 18:39:22.583008] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583013] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.609 [2024-10-08 18:39:22.583026] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:28.609 [2024-10-08 18:39:22.583035] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:28.609 [2024-10-08 18:39:22.583040] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:28.609 [2024-10-08 18:39:22.583054] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583058] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.609 [2024-10-08 18:39:22.583070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.609 [2024-10-08 18:39:22.583086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.609 [2024-10-08 18:39:22.583274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.609 [2024-10-08 18:39:22.583280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.609 [2024-10-08 18:39:22.583290] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583294] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.609 [2024-10-08 18:39:22.583300] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:28.609 [2024-10-08 18:39:22.583308] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:28.609 [2024-10-08 18:39:22.583315] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583319] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.609 [2024-10-08 18:39:22.583329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.609 [2024-10-08 18:39:22.583340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.609 [2024-10-08 18:39:22.583511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.609 [2024-10-08 18:39:22.583517] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.609 [2024-10-08 18:39:22.583520] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583524] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.609 [2024-10-08 18:39:22.583530] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:28.609 [2024-10-08 18:39:22.583539] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:28.609 [2024-10-08 18:39:22.583546] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583550] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.609 [2024-10-08 18:39:22.583560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.609 [2024-10-08 18:39:22.583570] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.609 
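[note] The FABRIC PROPERTY GET/SET capsules in these entries are the Fabrics replacement for PCIe register access: the host is walking the standard NVMe controller-enable sequence over the admin queue, reading VS (register offset 0x08), CAP (0x00) and CC (0x14), clearing CC.EN and waiting for CSTS (0x1C) to report RDY = 0, then setting CC.EN = 1 and polling until RDY = 1. The per-step "setting state to ..." lines make that state machine easy to lift out of a saved copy of this console output:

  # Summarize the init state machine from a saved log (sketch;
  # 'build.log' is a placeholder for a local copy of this output).
  grep -o 'setting state to [a-zA-Z0-9 .=]*' build.log | uniq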
[2024-10-08 18:39:22.583777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.609 [2024-10-08 18:39:22.583783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.609 [2024-10-08 18:39:22.583786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.609 [2024-10-08 18:39:22.583795] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:28.609 [2024-10-08 18:39:22.583805] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583809] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.583812] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.609 [2024-10-08 18:39:22.583819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.609 [2024-10-08 18:39:22.583829] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.609 [2024-10-08 18:39:22.584043] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.609 [2024-10-08 18:39:22.584050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.609 [2024-10-08 18:39:22.584053] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.609 [2024-10-08 18:39:22.584057] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.609 [2024-10-08 18:39:22.584064] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:28.609 [2024-10-08 18:39:22.584069] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:28.610 [2024-10-08 18:39:22.584077] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:28.610 [2024-10-08 18:39:22.584183] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:28.610 [2024-10-08 18:39:22.584188] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:28.610 [2024-10-08 18:39:22.584198] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584202] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.584212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.610 [2024-10-08 18:39:22.584222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.610 [2024-10-08 18:39:22.584407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.610 [2024-10-08 18:39:22.584413] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:23:28.610 [2024-10-08 18:39:22.584416] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584420] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.610 [2024-10-08 18:39:22.584425] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:28.610 [2024-10-08 18:39:22.584435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584439] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584442] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.584449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.610 [2024-10-08 18:39:22.584459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.610 [2024-10-08 18:39:22.584662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.610 [2024-10-08 18:39:22.584669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.610 [2024-10-08 18:39:22.584672] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.610 [2024-10-08 18:39:22.584680] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:28.610 [2024-10-08 18:39:22.584685] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:28.610 [2024-10-08 18:39:22.584693] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:28.610 [2024-10-08 18:39:22.584701] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:28.610 [2024-10-08 18:39:22.584711] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584714] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.584721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.610 [2024-10-08 18:39:22.584735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.610 [2024-10-08 18:39:22.584950] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.610 [2024-10-08 18:39:22.584956] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.610 [2024-10-08 18:39:22.584960] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584964] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f8620): datao=0, datal=4096, cccid=0 00:23:28.610 [2024-10-08 18:39:22.584969] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1158480) on tqpair(0x10f8620): expected_datao=0, 
payload_size=4096 00:23:28.610 [2024-10-08 18:39:22.584982] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.584999] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.585003] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.629989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.610 [2024-10-08 18:39:22.630001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.610 [2024-10-08 18:39:22.630005] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.610 [2024-10-08 18:39:22.630018] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:28.610 [2024-10-08 18:39:22.630024] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:28.610 [2024-10-08 18:39:22.630028] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:28.610 [2024-10-08 18:39:22.630033] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:28.610 [2024-10-08 18:39:22.630038] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:28.610 [2024-10-08 18:39:22.630043] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:28.610 [2024-10-08 18:39:22.630056] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:28.610 [2024-10-08 18:39:22.630064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.610 [2024-10-08 18:39:22.630094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.610 [2024-10-08 18:39:22.630294] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.610 [2024-10-08 18:39:22.630300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.610 [2024-10-08 18:39:22.630304] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630308] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.610 [2024-10-08 18:39:22.630316] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630319] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630323] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.610 [2024-10-08 18:39:22.630336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630347] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.610 [2024-10-08 18:39:22.630359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.610 [2024-10-08 18:39:22.630378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.610 [2024-10-08 18:39:22.630395] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:28.610 [2024-10-08 18:39:22.630407] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:28.610 [2024-10-08 18:39:22.630413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630417] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.610 [2024-10-08 18:39:22.630436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158480, cid 0, qid 0 00:23:28.610 [2024-10-08 18:39:22.630441] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158600, cid 1, qid 0 00:23:28.610 [2024-10-08 18:39:22.630446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158780, cid 2, qid 0 00:23:28.610 [2024-10-08 18:39:22.630450] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158900, cid 3, qid 0 00:23:28.610 [2024-10-08 18:39:22.630455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158a80, cid 4, qid 0 00:23:28.610 [2024-10-08 18:39:22.630692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.610 [2024-10-08 18:39:22.630699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.610 [2024-10-08 18:39:22.630702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1158a80) on tqpair=0x10f8620 00:23:28.610 [2024-10-08 18:39:22.630711] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:28.610 [2024-10-08 18:39:22.630716] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:28.610 [2024-10-08 18:39:22.630728] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.630732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f8620) 00:23:28.610 [2024-10-08 18:39:22.630738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.610 [2024-10-08 18:39:22.630748] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158a80, cid 4, qid 0 00:23:28.610 [2024-10-08 18:39:22.631009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.610 [2024-10-08 18:39:22.631022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.610 [2024-10-08 18:39:22.631025] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.631029] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f8620): datao=0, datal=4096, cccid=4 00:23:28.610 [2024-10-08 18:39:22.631034] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1158a80) on tqpair(0x10f8620): expected_datao=0, payload_size=4096 00:23:28.610 [2024-10-08 18:39:22.631038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.631045] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.631049] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.610 [2024-10-08 18:39:22.631186] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.611 [2024-10-08 18:39:22.631192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.611 [2024-10-08 18:39:22.631196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158a80) on tqpair=0x10f8620 00:23:28.611 [2024-10-08 18:39:22.631213] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:28.611 [2024-10-08 18:39:22.631243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f8620) 00:23:28.611 [2024-10-08 18:39:22.631254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.611 [2024-10-08 18:39:22.631261] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631265] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631268] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10f8620) 00:23:28.611 [2024-10-08 18:39:22.631275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.611 [2024-10-08 
18:39:22.631287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158a80, cid 4, qid 0 00:23:28.611 [2024-10-08 18:39:22.631292] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158c00, cid 5, qid 0 00:23:28.611 [2024-10-08 18:39:22.631505] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.611 [2024-10-08 18:39:22.631511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.611 [2024-10-08 18:39:22.631514] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631518] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f8620): datao=0, datal=1024, cccid=4 00:23:28.611 [2024-10-08 18:39:22.631522] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1158a80) on tqpair(0x10f8620): expected_datao=0, payload_size=1024 00:23:28.611 [2024-10-08 18:39:22.631527] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631533] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631537] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631543] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.611 [2024-10-08 18:39:22.631549] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.611 [2024-10-08 18:39:22.631552] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.611 [2024-10-08 18:39:22.631556] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158c00) on tqpair=0x10f8620 00:23:28.875 [2024-10-08 18:39:22.672147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.875 [2024-10-08 18:39:22.672161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.875 [2024-10-08 18:39:22.672164] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672168] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158a80) on tqpair=0x10f8620 00:23:28.875 [2024-10-08 18:39:22.672194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f8620) 00:23:28.875 [2024-10-08 18:39:22.672206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.875 [2024-10-08 18:39:22.672222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158a80, cid 4, qid 0 00:23:28.875 [2024-10-08 18:39:22.672425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.875 [2024-10-08 18:39:22.672433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.875 [2024-10-08 18:39:22.672437] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672440] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f8620): datao=0, datal=3072, cccid=4 00:23:28.875 [2024-10-08 18:39:22.672445] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1158a80) on tqpair(0x10f8620): expected_datao=0, payload_size=3072 00:23:28.875 [2024-10-08 18:39:22.672449] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672466] 
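[note] The GET LOG PAGE commands around this point fetch the discovery log in the usual multi-pass pattern: a 1 KiB read of the page header, a full 3 KiB read covering the header plus both records, and (just below) a final 8-byte re-read of the generation counter. The sizes are encoded in cdw10; per the NVMe base spec, bits 7:0 carry the log identifier and bits 31:16 the number of dwords minus one:

  cdw10 0x00ff0070 -> LID 0x70 (discovery), NUMDL 0x00ff = 255 -> 256 dwords = 1024 bytes
  cdw10 0x02ff0070 -> LID 0x70,             NUMDL 0x02ff = 767 -> 768 dwords = 3072 bytes
  cdw10 0x00010070 -> LID 0x70,             NUMDL 0x0001 =   1 ->   2 dwords =    8 bytes

Each value matches the datal= figure in the adjacent c2h_data entries.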
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672471] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.875 [2024-10-08 18:39:22.672647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.875 [2024-10-08 18:39:22.672650] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672654] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158a80) on tqpair=0x10f8620 00:23:28.875 [2024-10-08 18:39:22.672663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f8620) 00:23:28.875 [2024-10-08 18:39:22.672673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.875 [2024-10-08 18:39:22.672687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158a80, cid 4, qid 0 00:23:28.875 [2024-10-08 18:39:22.672906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.875 [2024-10-08 18:39:22.672912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.875 [2024-10-08 18:39:22.672916] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.875 [2024-10-08 18:39:22.672920] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f8620): datao=0, datal=8, cccid=4 00:23:28.875 [2024-10-08 18:39:22.672924] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1158a80) on tqpair(0x10f8620): expected_datao=0, payload_size=8 00:23:28.875 [2024-10-08 18:39:22.672928] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.876 [2024-10-08 18:39:22.672935] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.876 [2024-10-08 18:39:22.672939] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.876 [2024-10-08 18:39:22.713172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.876 [2024-10-08 18:39:22.713185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.876 [2024-10-08 18:39:22.713188] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.876 [2024-10-08 18:39:22.713192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158a80) on tqpair=0x10f8620 00:23:28.876 ===================================================== 00:23:28.876 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:28.876 ===================================================== 00:23:28.876 Controller Capabilities/Features 00:23:28.876 ================================ 00:23:28.876 Vendor ID: 0000 00:23:28.876 Subsystem Vendor ID: 0000 00:23:28.876 Serial Number: .................... 00:23:28.876 Model Number: ........................................ 
00:23:28.876 Firmware Version: 25.01
00:23:28.876 Recommended Arb Burst: 0
00:23:28.876 IEEE OUI Identifier: 00 00 00
00:23:28.876 Multi-path I/O
00:23:28.876 May have multiple subsystem ports: No
00:23:28.876 May have multiple controllers: No
00:23:28.876 Associated with SR-IOV VF: No
00:23:28.876 Max Data Transfer Size: 131072
00:23:28.876 Max Number of Namespaces: 0
00:23:28.876 Max Number of I/O Queues: 1024
00:23:28.876 NVMe Specification Version (VS): 1.3
00:23:28.876 NVMe Specification Version (Identify): 1.3
00:23:28.876 Maximum Queue Entries: 128
00:23:28.876 Contiguous Queues Required: Yes
00:23:28.876 Arbitration Mechanisms Supported
00:23:28.876 Weighted Round Robin: Not Supported
00:23:28.876 Vendor Specific: Not Supported
00:23:28.876 Reset Timeout: 15000 ms
00:23:28.876 Doorbell Stride: 4 bytes
00:23:28.876 NVM Subsystem Reset: Not Supported
00:23:28.876 Command Sets Supported
00:23:28.876 NVM Command Set: Supported
00:23:28.876 Boot Partition: Not Supported
00:23:28.876 Memory Page Size Minimum: 4096 bytes
00:23:28.876 Memory Page Size Maximum: 4096 bytes
00:23:28.876 Persistent Memory Region: Not Supported
00:23:28.876 Optional Asynchronous Events Supported
00:23:28.876 Namespace Attribute Notices: Not Supported
00:23:28.876 Firmware Activation Notices: Not Supported
00:23:28.876 ANA Change Notices: Not Supported
00:23:28.876 PLE Aggregate Log Change Notices: Not Supported
00:23:28.876 LBA Status Info Alert Notices: Not Supported
00:23:28.876 EGE Aggregate Log Change Notices: Not Supported
00:23:28.876 Normal NVM Subsystem Shutdown event: Not Supported
00:23:28.876 Zone Descriptor Change Notices: Not Supported
00:23:28.876 Discovery Log Change Notices: Supported
00:23:28.876 Controller Attributes
00:23:28.876 128-bit Host Identifier: Not Supported
00:23:28.876 Non-Operational Permissive Mode: Not Supported
00:23:28.876 NVM Sets: Not Supported
00:23:28.876 Read Recovery Levels: Not Supported
00:23:28.876 Endurance Groups: Not Supported
00:23:28.876 Predictable Latency Mode: Not Supported
00:23:28.876 Traffic Based Keep ALive: Not Supported
00:23:28.876 Namespace Granularity: Not Supported
00:23:28.876 SQ Associations: Not Supported
00:23:28.876 UUID List: Not Supported
00:23:28.876 Multi-Domain Subsystem: Not Supported
00:23:28.876 Fixed Capacity Management: Not Supported
00:23:28.876 Variable Capacity Management: Not Supported
00:23:28.876 Delete Endurance Group: Not Supported
00:23:28.876 Delete NVM Set: Not Supported
00:23:28.876 Extended LBA Formats Supported: Not Supported
00:23:28.876 Flexible Data Placement Supported: Not Supported
00:23:28.876
00:23:28.876 Controller Memory Buffer Support
00:23:28.876 ================================
00:23:28.876 Supported: No
00:23:28.876
00:23:28.876 Persistent Memory Region Support
00:23:28.876 ================================
00:23:28.876 Supported: No
00:23:28.876
00:23:28.876 Admin Command Set Attributes
00:23:28.876 ============================
00:23:28.876 Security Send/Receive: Not Supported
00:23:28.876 Format NVM: Not Supported
00:23:28.876 Firmware Activate/Download: Not Supported
00:23:28.876 Namespace Management: Not Supported
00:23:28.876 Device Self-Test: Not Supported
00:23:28.876 Directives: Not Supported
00:23:28.876 NVMe-MI: Not Supported
00:23:28.876 Virtualization Management: Not Supported
00:23:28.876 Doorbell Buffer Config: Not Supported
00:23:28.876 Get LBA Status Capability: Not Supported
00:23:28.876 Command & Feature Lockdown Capability: Not Supported
00:23:28.876 Abort Command Limit: 1
00:23:28.876 Async Event Request Limit: 4
00:23:28.876 Number of Firmware Slots: N/A
00:23:28.876 Firmware Slot 1 Read-Only: N/A
00:23:28.876 Firmware Activation Without Reset: N/A
00:23:28.876 Multiple Update Detection Support: N/A
00:23:28.876 Firmware Update Granularity: No Information Provided
00:23:28.876 Per-Namespace SMART Log: No
00:23:28.876 Asymmetric Namespace Access Log Page: Not Supported
00:23:28.876 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:28.876 Command Effects Log Page: Not Supported
00:23:28.876 Get Log Page Extended Data: Supported
00:23:28.876 Telemetry Log Pages: Not Supported
00:23:28.876 Persistent Event Log Pages: Not Supported
00:23:28.876 Supported Log Pages Log Page: May Support
00:23:28.876 Commands Supported & Effects Log Page: Not Supported
00:23:28.876 Feature Identifiers & Effects Log Page:May Support
00:23:28.876 NVMe-MI Commands & Effects Log Page: May Support
00:23:28.876 Data Area 4 for Telemetry Log: Not Supported
00:23:28.876 Error Log Page Entries Supported: 128
00:23:28.876 Keep Alive: Not Supported
00:23:28.876
00:23:28.876 NVM Command Set Attributes
00:23:28.876 ==========================
00:23:28.876 Submission Queue Entry Size
00:23:28.876 Max: 1
00:23:28.876 Min: 1
00:23:28.876 Completion Queue Entry Size
00:23:28.876 Max: 1
00:23:28.876 Min: 1
00:23:28.876 Number of Namespaces: 0
00:23:28.876 Compare Command: Not Supported
00:23:28.876 Write Uncorrectable Command: Not Supported
00:23:28.876 Dataset Management Command: Not Supported
00:23:28.876 Write Zeroes Command: Not Supported
00:23:28.876 Set Features Save Field: Not Supported
00:23:28.876 Reservations: Not Supported
00:23:28.876 Timestamp: Not Supported
00:23:28.876 Copy: Not Supported
00:23:28.876 Volatile Write Cache: Not Present
00:23:28.876 Atomic Write Unit (Normal): 1
00:23:28.876 Atomic Write Unit (PFail): 1
00:23:28.876 Atomic Compare & Write Unit: 1
00:23:28.876 Fused Compare & Write: Supported
00:23:28.876 Scatter-Gather List
00:23:28.876 SGL Command Set: Supported
00:23:28.876 SGL Keyed: Supported
00:23:28.876 SGL Bit Bucket Descriptor: Not Supported
00:23:28.876 SGL Metadata Pointer: Not Supported
00:23:28.876 Oversized SGL: Not Supported
00:23:28.876 SGL Metadata Address: Not Supported
00:23:28.876 SGL Offset: Supported
00:23:28.876 Transport SGL Data Block: Not Supported
00:23:28.876 Replay Protected Memory Block: Not Supported
00:23:28.876
00:23:28.876 Firmware Slot Information
00:23:28.876 =========================
00:23:28.876 Active slot: 0
00:23:28.876
00:23:28.876
00:23:28.876 Error Log
00:23:28.876 =========
00:23:28.876
00:23:28.876 Active Namespaces
00:23:28.876 =================
00:23:28.876 Discovery Log Page
00:23:28.876 ==================
00:23:28.876 Generation Counter: 2
00:23:28.876 Number of Records: 2
00:23:28.876 Record Format: 0
00:23:28.876
00:23:28.876 Discovery Log Entry 0
00:23:28.876 ----------------------
00:23:28.876 Transport Type: 3 (TCP)
00:23:28.876 Address Family: 1 (IPv4)
00:23:28.876 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:28.876 Entry Flags:
00:23:28.876 Duplicate Returned Information: 1
00:23:28.876 Explicit Persistent Connection Support for Discovery: 1
00:23:28.876 Transport Requirements:
00:23:28.876 Secure Channel: Not Required
00:23:28.876 Port ID: 0 (0x0000)
00:23:28.876 Controller ID: 65535 (0xffff)
00:23:28.876 Admin Max SQ Size: 128
00:23:28.876 Transport Service Identifier: 4420
00:23:28.876 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:28.876 Transport Address: 10.0.0.2
00:23:28.876
Discovery Log Entry 1 00:23:28.876 ---------------------- 00:23:28.876 Transport Type: 3 (TCP) 00:23:28.876 Address Family: 1 (IPv4) 00:23:28.877 Subsystem Type: 2 (NVM Subsystem) 00:23:28.877 Entry Flags: 00:23:28.877 Duplicate Returned Information: 0 00:23:28.877 Explicit Persistent Connection Support for Discovery: 0 00:23:28.877 Transport Requirements: 00:23:28.877 Secure Channel: Not Required 00:23:28.877 Port ID: 0 (0x0000) 00:23:28.877 Controller ID: 65535 (0xffff) 00:23:28.877 Admin Max SQ Size: 128 00:23:28.877 Transport Service Identifier: 4420 00:23:28.877 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:28.877 Transport Address: 10.0.0.2 [2024-10-08 18:39:22.713288] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:28.877 [2024-10-08 18:39:22.713300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158480) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.713308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.877 [2024-10-08 18:39:22.713313] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158600) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.713320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.877 [2024-10-08 18:39:22.713325] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158780) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.713330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.877 [2024-10-08 18:39:22.713335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158900) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.713339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.877 [2024-10-08 18:39:22.713349] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713354] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713357] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f8620) 00:23:28.877 [2024-10-08 18:39:22.713365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.877 [2024-10-08 18:39:22.713380] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158900, cid 3, qid 0 00:23:28.877 [2024-10-08 18:39:22.713491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.713498] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.713502] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713506] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158900) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.713513] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f8620) 00:23:28.877 [2024-10-08 
18:39:22.713528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.877 [2024-10-08 18:39:22.713541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158900, cid 3, qid 0 00:23:28.877 [2024-10-08 18:39:22.713741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.713748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.713751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158900) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.713760] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:28.877 [2024-10-08 18:39:22.713765] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:28.877 [2024-10-08 18:39:22.713778] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713782] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.713786] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f8620) 00:23:28.877 [2024-10-08 18:39:22.713793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.877 [2024-10-08 18:39:22.713803] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158900, cid 3, qid 0 00:23:28.877 [2024-10-08 18:39:22.717987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.717995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.717999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.718003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158900) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.718017] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.718021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.718025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f8620) 00:23:28.877 [2024-10-08 18:39:22.718032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.877 [2024-10-08 18:39:22.718044] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1158900, cid 3, qid 0 00:23:28.877 [2024-10-08 18:39:22.718232] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.718239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.718243] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.718247] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1158900) on tqpair=0x10f8620 00:23:28.877 [2024-10-08 18:39:22.718254] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:23:28.877 00:23:28.877 18:39:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
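[note] That completes the first identify pass: connect to the discovery subsystem, dump its controller data and discovery log (two records: the discovery subsystem itself and cnode1), then the clean controller shutdown traced just above ("shutdown complete in 4 milliseconds"). Because nvme-tcp was modprobe'd during bring-up, the same listener could also be exercised from the kernel initiator; a hypothetical cross-check with nvme-cli (assumed installed, not part of this test) would look like:

  nvme discover -t tcp -a 10.0.0.2 -s 4420          # same two discovery records
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                         # Malloc0 shows up as a namespace
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The second spdk_nvme_identify run that follows targets nqn.2016-06.io.spdk:cnode1 directly instead of the discovery subsystem.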
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:28.877 [2024-10-08 18:39:22.767699] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:23:28.877 [2024-10-08 18:39:22.767778] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316636 ] 00:23:28.877 [2024-10-08 18:39:22.806914] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:28.877 [2024-10-08 18:39:22.806970] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:28.877 [2024-10-08 18:39:22.806986] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:28.877 [2024-10-08 18:39:22.807008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:28.877 [2024-10-08 18:39:22.807019] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:28.877 [2024-10-08 18:39:22.811272] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:28.877 [2024-10-08 18:39:22.811312] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2379620 0 00:23:28.877 [2024-10-08 18:39:22.818995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:28.877 [2024-10-08 18:39:22.819016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:28.877 [2024-10-08 18:39:22.819020] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:28.877 [2024-10-08 18:39:22.819024] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:28.877 [2024-10-08 18:39:22.819057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.819063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.819067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.877 [2024-10-08 18:39:22.819082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:28.877 [2024-10-08 18:39:22.819103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.877 [2024-10-08 18:39:22.826996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.827006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.827015] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.827020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.877 [2024-10-08 18:39:22.827033] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:28.877 [2024-10-08 18:39:22.827040] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:28.877 [2024-10-08 18:39:22.827046] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to read vs wait for vs (no timeout) 00:23:28.877 [2024-10-08 18:39:22.827058] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.827062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.827066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.877 [2024-10-08 18:39:22.827074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.877 [2024-10-08 18:39:22.827089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.877 [2024-10-08 18:39:22.827316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.827322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.827326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.827330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.877 [2024-10-08 18:39:22.827335] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:28.877 [2024-10-08 18:39:22.827342] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:28.877 [2024-10-08 18:39:22.827349] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.827353] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.877 [2024-10-08 18:39:22.827357] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.877 [2024-10-08 18:39:22.827364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.877 [2024-10-08 18:39:22.827374] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.877 [2024-10-08 18:39:22.827591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.877 [2024-10-08 18:39:22.827598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.877 [2024-10-08 18:39:22.827601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.827605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.827611] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:28.878 [2024-10-08 18:39:22.827620] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:28.878 [2024-10-08 18:39:22.827627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.827631] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.827634] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.827641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.878 [2024-10-08 18:39:22.827651] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.878 [2024-10-08 18:39:22.827864] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.878 [2024-10-08 18:39:22.827871] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.878 [2024-10-08 18:39:22.827875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.827882] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.827887] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:28.878 [2024-10-08 18:39:22.827897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.827901] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.827904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.827911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.878 [2024-10-08 18:39:22.827921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.878 [2024-10-08 18:39:22.828106] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.878 [2024-10-08 18:39:22.828112] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.878 [2024-10-08 18:39:22.828116] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828120] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.828124] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:28.878 [2024-10-08 18:39:22.828129] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:28.878 [2024-10-08 18:39:22.828137] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:28.878 [2024-10-08 18:39:22.828243] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:28.878 [2024-10-08 18:39:22.828247] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:28.878 [2024-10-08 18:39:22.828256] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.828270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.878 [2024-10-08 18:39:22.828280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.878 [2024-10-08 18:39:22.828455] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.878 [2024-10-08 18:39:22.828461] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.878 [2024-10-08 18:39:22.828465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828469] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.828473] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:28.878 [2024-10-08 18:39:22.828483] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.828497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.878 [2024-10-08 18:39:22.828507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.878 [2024-10-08 18:39:22.828715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.878 [2024-10-08 18:39:22.828722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.878 [2024-10-08 18:39:22.828728] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.828736] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:28.878 [2024-10-08 18:39:22.828741] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:28.878 [2024-10-08 18:39:22.828749] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:28.878 [2024-10-08 18:39:22.828757] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:28.878 [2024-10-08 18:39:22.828767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.828770] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.828777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.878 [2024-10-08 18:39:22.828788] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.878 [2024-10-08 18:39:22.829040] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.878 [2024-10-08 18:39:22.829048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.878 [2024-10-08 18:39:22.829051] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.829056] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=4096, cccid=0 00:23:28.878 [2024-10-08 18:39:22.829060] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9480) on tqpair(0x2379620): expected_datao=0, 
payload_size=4096 00:23:28.878 [2024-10-08 18:39:22.829065] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.829078] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.829082] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.878 [2024-10-08 18:39:22.870185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.878 [2024-10-08 18:39:22.870188] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.870202] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:28.878 [2024-10-08 18:39:22.870207] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:28.878 [2024-10-08 18:39:22.870212] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:28.878 [2024-10-08 18:39:22.870216] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:28.878 [2024-10-08 18:39:22.870221] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:28.878 [2024-10-08 18:39:22.870226] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:28.878 [2024-10-08 18:39:22.870239] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:28.878 [2024-10-08 18:39:22.870247] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870251] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870255] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.870263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.878 [2024-10-08 18:39:22.870279] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.878 [2024-10-08 18:39:22.870499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.878 [2024-10-08 18:39:22.870507] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.878 [2024-10-08 18:39:22.870511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620 00:23:28.878 [2024-10-08 18:39:22.870522] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870526] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870529] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.870535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
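
The trace to this point is the NVMe-oF controller initialization state machine that spdk_nvme_identify walks over TCP: ICReq/ICResp exchange, FABRIC CONNECT on the admin queue (qid 0), property reads of VS and CAP, CC.EN=0 with a wait for CSTS.RDY=0, then CC.EN=1 with a wait for CSTS.RDY=1, and finally IDENTIFY controller (CNS 01h, the cdw10:00000001 command). The same handshake can be driven from SPDK's public API; the sketch below is minimal and illustrative (the app name and error handling are assumptions, not part of this test), with spdk_nvme_connect() performing the whole sequence synchronously:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* arbitrary app name, an assumption */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* The same target this test connects to. */
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        /* spdk_nvme_connect() runs the state machine logged above:
         * icreq/icresp, FABRIC CONNECT, VS/CAP reads, the CC.EN toggle,
         * the CSTS.RDY waits, and IDENTIFY controller. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to %s failed\n", trid.traddr);
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", cdata->sn);   /* SPDK00000000000001 here */

        spdk_nvme_detach(ctrlr);
        return 0;
    }
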
00:23:28.878 [2024-10-08 18:39:22.870542] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870546] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.870555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.878 [2024-10-08 18:39:22.870561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870565] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870569] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.870574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.878 [2024-10-08 18:39:22.870581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.870594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.878 [2024-10-08 18:39:22.870598] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:28.878 [2024-10-08 18:39:22.870610] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:28.878 [2024-10-08 18:39:22.870616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.878 [2024-10-08 18:39:22.870620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:28.878 [2024-10-08 18:39:22.870627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.879 [2024-10-08 18:39:22.870639] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9480, cid 0, qid 0 00:23:28.879 [2024-10-08 18:39:22.870645] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9600, cid 1, qid 0 00:23:28.879 [2024-10-08 18:39:22.870649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9780, cid 2, qid 0 00:23:28.879 [2024-10-08 18:39:22.870654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0 00:23:28.879 [2024-10-08 18:39:22.870659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:28.879 [2024-10-08 18:39:22.870905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.879 [2024-10-08 18:39:22.870911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.879 [2024-10-08 18:39:22.870915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.870921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620 00:23:28.879 [2024-10-08 18:39:22.870927] 
nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:28.879 [2024-10-08 18:39:22.870932] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.870940] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.870949] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.870955] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.870959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.870963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:28.879 [2024-10-08 18:39:22.870969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.879 [2024-10-08 18:39:22.870990] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:28.879 [2024-10-08 18:39:22.871203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.879 [2024-10-08 18:39:22.871209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.879 [2024-10-08 18:39:22.871213] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.871216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620 00:23:28.879 [2024-10-08 18:39:22.871281] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.871292] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.871299] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.871303] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:28.879 [2024-10-08 18:39:22.871310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.879 [2024-10-08 18:39:22.871320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:28.879 [2024-10-08 18:39:22.871517] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.879 [2024-10-08 18:39:22.871523] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.879 [2024-10-08 18:39:22.871527] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.871531] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=4096, cccid=4 00:23:28.879 [2024-10-08 18:39:22.871535] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9a80) on tqpair(0x2379620): expected_datao=0, payload_size=4096 00:23:28.879 [2024-10-08 18:39:22.871540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.879 [2024-10-08 
18:39:22.871554] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.871558] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.914987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.879 [2024-10-08 18:39:22.914998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.879 [2024-10-08 18:39:22.915002] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.915006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620 00:23:28.879 [2024-10-08 18:39:22.915021] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:28.879 [2024-10-08 18:39:22.915033] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.915043] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:28.879 [2024-10-08 18:39:22.915050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.915054] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:28.879 [2024-10-08 18:39:22.915061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.879 [2024-10-08 18:39:22.915074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:28.879 [2024-10-08 18:39:22.915293] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.879 [2024-10-08 18:39:22.915301] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.879 [2024-10-08 18:39:22.915305] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.915309] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=4096, cccid=4 00:23:28.879 [2024-10-08 18:39:22.915313] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9a80) on tqpair(0x2379620): expected_datao=0, payload_size=4096 00:23:28.879 [2024-10-08 18:39:22.915318] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.915331] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.879 [2024-10-08 18:39:22.915336] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.956132] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:22.956143] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:22.956147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.956151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:22.956170] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.956180] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:29.143 
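
At this point the host has configured AER, the keep-alive (5000000 us), and the queue count, and is discovering namespaces: IDENTIFY active namespace list (CNS 02h, cdw10:00000002 above), then IDENTIFY namespace (CNS 00h) and its ID descriptors (CNS 03h) for the single namespace the target reports ("Namespace 1 was added"). Once spdk_nvme_connect() has returned, the same list is reachable through the active-namespace iterators; a hedged sketch, with the helper name being illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Walk the active namespace list that the IDENTIFY exchanges populated. */
    static void print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            /* This run's namespace 1: 131072 LBAs of 512 bytes. */
            printf("Namespace %u: %ju LBAs of %u bytes\n", nsid,
                   (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }
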
[2024-10-08 18:39:22.956188] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.956192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:29.143 [2024-10-08 18:39:22.956199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.143 [2024-10-08 18:39:22.956211] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:29.143 [2024-10-08 18:39:22.956479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.143 [2024-10-08 18:39:22.956485] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.143 [2024-10-08 18:39:22.956489] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.956492] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=4096, cccid=4 00:23:29.143 [2024-10-08 18:39:22.956497] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9a80) on tqpair(0x2379620): expected_datao=0, payload_size=4096 00:23:29.143 [2024-10-08 18:39:22.956501] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.956515] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.956519] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997168] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:22.997178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:22.997186] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:22.997199] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997207] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997217] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997224] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997229] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997234] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997240] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:29.143 [2024-10-08 18:39:22.997244] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:29.143 [2024-10-08 18:39:22.997250] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to ready (no timeout) 00:23:29.143 [2024-10-08 18:39:22.997267] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:29.143 [2024-10-08 18:39:22.997278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.143 [2024-10-08 18:39:22.997285] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379620) 00:23:29.143 [2024-10-08 18:39:22.997299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.143 [2024-10-08 18:39:22.997312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:29.143 [2024-10-08 18:39:22.997317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9c00, cid 5, qid 0 00:23:29.143 [2024-10-08 18:39:22.997452] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:22.997458] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:22.997461] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997465] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:22.997472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:22.997478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:22.997481] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9c00) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:22.997495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997498] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379620) 00:23:29.143 [2024-10-08 18:39:22.997505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.143 [2024-10-08 18:39:22.997515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9c00, cid 5, qid 0 00:23:29.143 [2024-10-08 18:39:22.997711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:22.997717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:22.997721] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997725] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9c00) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:22.997734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379620) 00:23:29.143 [2024-10-08 18:39:22.997744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.143 [2024-10-08 18:39:22.997754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9c00, cid 5, qid 0 00:23:29.143 [2024-10-08 18:39:22.997957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:22.997964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:22.997967] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:22.997971] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9c00) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:23.001993] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:23.001998] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379620) 00:23:29.143 [2024-10-08 18:39:23.002005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.143 [2024-10-08 18:39:23.002016] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9c00, cid 5, qid 0 00:23:29.143 [2024-10-08 18:39:23.002219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.143 [2024-10-08 18:39:23.002227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.143 [2024-10-08 18:39:23.002230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.143 [2024-10-08 18:39:23.002234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9c00) on tqpair=0x2379620 00:23:29.143 [2024-10-08 18:39:23.002252] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379620) 00:23:29.144 [2024-10-08 18:39:23.002264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.144 [2024-10-08 18:39:23.002271] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379620) 00:23:29.144 [2024-10-08 18:39:23.002281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.144 [2024-10-08 18:39:23.002289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2379620) 00:23:29.144 [2024-10-08 18:39:23.002299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.144 [2024-10-08 18:39:23.002307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002311] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2379620) 00:23:29.144 [2024-10-08 18:39:23.002317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
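
The four GET LOG PAGE commands above encode the page and transfer size in cdw10: bits 07:00 carry the Log Page Identifier (LID) and bits 31:16 the zero-based lower dword count (NUMDL). Decoded: cdw10:07ff0001 is LID 01h (Error Information) for 0x800 dwords = 8192 bytes, i.e. the 128 error-log entries of 64 bytes each that this controller advertises; 007f0002 is LID 02h (SMART / Health Information), 512 bytes; 007f0003 is LID 03h (Firmware Slot Information), 512 bytes; 03ff0005 is LID 05h (Commands Supported and Effects), 4096 bytes. A self-contained sketch of the encoding (the helper name is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* cdw10 for GET LOG PAGE: NUMDL in bits 31:16, LID in bits 07:00. */
    static uint32_t get_log_page_cdw10(uint8_t lid, uint32_t payload_bytes)
    {
        uint32_t numd = payload_bytes / 4;              /* transfer size in dwords */
        return (((numd - 1) & 0xffff) << 16) | lid;     /* NUMDL is zero-based */
    }

    int main(void)
    {
        printf("0x%08x\n", get_log_page_cdw10(0x01, 8192)); /* 0x07ff0001 */
        printf("0x%08x\n", get_log_page_cdw10(0x02, 512));  /* 0x007f0002 */
        printf("0x%08x\n", get_log_page_cdw10(0x03, 512));  /* 0x007f0003 */
        printf("0x%08x\n", get_log_page_cdw10(0x05, 4096)); /* 0x03ff0005 */
        return 0;
    }

In SPDK the same values come out of spdk_nvme_ctrlr_cmd_get_log_page(), which derives the dword count from the payload size passed in.
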
00:23:29.144 [2024-10-08 18:39:23.002328] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9c00, cid 5, qid 0 00:23:29.144 [2024-10-08 18:39:23.002336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9a80, cid 4, qid 0 00:23:29.144 [2024-10-08 18:39:23.002341] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9d80, cid 6, qid 0 00:23:29.144 [2024-10-08 18:39:23.002345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9f00, cid 7, qid 0 00:23:29.144 [2024-10-08 18:39:23.002660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.144 [2024-10-08 18:39:23.002667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.144 [2024-10-08 18:39:23.002670] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002674] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=8192, cccid=5 00:23:29.144 [2024-10-08 18:39:23.002679] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9c00) on tqpair(0x2379620): expected_datao=0, payload_size=8192 00:23:29.144 [2024-10-08 18:39:23.002683] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002774] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002778] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002784] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.144 [2024-10-08 18:39:23.002790] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.144 [2024-10-08 18:39:23.002793] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002797] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=512, cccid=4 00:23:29.144 [2024-10-08 18:39:23.002801] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9a80) on tqpair(0x2379620): expected_datao=0, payload_size=512 00:23:29.144 [2024-10-08 18:39:23.002806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002812] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002816] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.144 [2024-10-08 18:39:23.002827] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.144 [2024-10-08 18:39:23.002830] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002834] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=512, cccid=6 00:23:29.144 [2024-10-08 18:39:23.002838] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9d80) on tqpair(0x2379620): expected_datao=0, payload_size=512 00:23:29.144 [2024-10-08 18:39:23.002843] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002849] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002853] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.144 [2024-10-08 18:39:23.002858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.144 
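
The C2H DATA PDUs that follow return those four log pages (datal=8192, 512, 512 and 4096, matching the requested sizes on cccids 5, 4, 6 and 7), and each capsule response is reaped by polling the admin queue. A minimal sketch of the issue-and-poll pattern against the public API (the context struct and callback names are illustrative; the buffer length for the error log here would be 8192 bytes, 128 entries of 64 bytes):

    #include <stdio.h>
    #include "spdk/nvme.h"

    struct log_page_ctx {
        int outstanding;
    };

    static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct log_page_ctx *ctx = arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed: sct=0x%x sc=0x%x\n",
                    cpl->status.sct, cpl->status.sc);
        }
        ctx->outstanding--;
    }

    /* Issue one Error Information log read, then spin until the C2H data
     * and the capsule response (the PDU type 7 / type 5 pairs above) land. */
    static int read_error_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
    {
        struct log_page_ctx ctx = { .outstanding = 1 };

        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_ERROR,
                                             SPDK_NVME_GLOBAL_NS_TAG,
                                             buf, len, 0,
                                             log_page_done, &ctx) != 0) {
            return -1;
        }
        while (ctx.outstanding > 0) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
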
[2024-10-08 18:39:23.002864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:29.144 [2024-10-08 18:39:23.002868] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002871] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379620): datao=0, datal=4096, cccid=7
00:23:29.144 [2024-10-08 18:39:23.002875] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23d9f00) on tqpair(0x2379620): expected_datao=0, payload_size=4096
00:23:29.144 [2024-10-08 18:39:23.002880] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002887] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002890] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.144 [2024-10-08 18:39:23.002911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.144 [2024-10-08 18:39:23.002914] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9c00) on tqpair=0x2379620
00:23:29.144 [2024-10-08 18:39:23.002933] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.144 [2024-10-08 18:39:23.002939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.144 [2024-10-08 18:39:23.002943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9a80) on tqpair=0x2379620
00:23:29.144 [2024-10-08 18:39:23.002957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.144 [2024-10-08 18:39:23.002963] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.144 [2024-10-08 18:39:23.002966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.002970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9d80) on tqpair=0x2379620
00:23:29.144 [2024-10-08 18:39:23.002988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.144 [2024-10-08 18:39:23.002994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.144 [2024-10-08 18:39:23.002997] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.144 [2024-10-08 18:39:23.003001] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9f00) on tqpair=0x2379620
00:23:29.144 =====================================================
00:23:29.144 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:29.144 =====================================================
00:23:29.144 Controller Capabilities/Features
00:23:29.144 ================================
00:23:29.144 Vendor ID: 8086
00:23:29.144 Subsystem Vendor ID: 8086
00:23:29.144 Serial Number: SPDK00000000000001
00:23:29.144 Model Number: SPDK bdev Controller
00:23:29.144 Firmware Version: 25.01
00:23:29.144 Recommended Arb Burst: 6
00:23:29.144 IEEE OUI Identifier: e4 d2 5c
00:23:29.144 Multi-path I/O
00:23:29.144 May have multiple subsystem ports: Yes
00:23:29.144 May have multiple controllers: Yes
00:23:29.144 Associated with SR-IOV VF: No
00:23:29.144 Max Data Transfer Size: 131072
00:23:29.144 Max Number of Namespaces: 32
00:23:29.144 Max Number of I/O Queues: 127
00:23:29.144 NVMe Specification Version (VS): 1.3
00:23:29.144 NVMe Specification Version (Identify): 1.3
00:23:29.144 Maximum Queue Entries: 128
00:23:29.144 Contiguous Queues Required: Yes
00:23:29.144 Arbitration Mechanisms Supported
00:23:29.144 Weighted Round Robin: Not Supported
00:23:29.144 Vendor Specific: Not Supported
00:23:29.144 Reset Timeout: 15000 ms
00:23:29.144 Doorbell Stride: 4 bytes
00:23:29.144 NVM Subsystem Reset: Not Supported
00:23:29.144 Command Sets Supported
00:23:29.144 NVM Command Set: Supported
00:23:29.144 Boot Partition: Not Supported
00:23:29.144 Memory Page Size Minimum: 4096 bytes
00:23:29.144 Memory Page Size Maximum: 4096 bytes
00:23:29.144 Persistent Memory Region: Not Supported
00:23:29.144 Optional Asynchronous Events Supported
00:23:29.144 Namespace Attribute Notices: Supported
00:23:29.144 Firmware Activation Notices: Not Supported
00:23:29.144 ANA Change Notices: Not Supported
00:23:29.144 PLE Aggregate Log Change Notices: Not Supported
00:23:29.144 LBA Status Info Alert Notices: Not Supported
00:23:29.144 EGE Aggregate Log Change Notices: Not Supported
00:23:29.144 Normal NVM Subsystem Shutdown event: Not Supported
00:23:29.144 Zone Descriptor Change Notices: Not Supported
00:23:29.144 Discovery Log Change Notices: Not Supported
00:23:29.144 Controller Attributes
00:23:29.144 128-bit Host Identifier: Supported
00:23:29.144 Non-Operational Permissive Mode: Not Supported
00:23:29.144 NVM Sets: Not Supported
00:23:29.144 Read Recovery Levels: Not Supported
00:23:29.144 Endurance Groups: Not Supported
00:23:29.144 Predictable Latency Mode: Not Supported
00:23:29.144 Traffic Based Keep ALive: Not Supported
00:23:29.144 Namespace Granularity: Not Supported
00:23:29.144 SQ Associations: Not Supported
00:23:29.144 UUID List: Not Supported
00:23:29.144 Multi-Domain Subsystem: Not Supported
00:23:29.144 Fixed Capacity Management: Not Supported
00:23:29.144 Variable Capacity Management: Not Supported
00:23:29.144 Delete Endurance Group: Not Supported
00:23:29.144 Delete NVM Set: Not Supported
00:23:29.144 Extended LBA Formats Supported: Not Supported
00:23:29.144 Flexible Data Placement Supported: Not Supported
00:23:29.144
00:23:29.144 Controller Memory Buffer Support
00:23:29.144 ================================
00:23:29.144 Supported: No
00:23:29.144
00:23:29.144 Persistent Memory Region Support
00:23:29.144 ================================
00:23:29.144 Supported: No
00:23:29.144
00:23:29.144 Admin Command Set Attributes
00:23:29.144 ============================
00:23:29.144 Security Send/Receive: Not Supported
00:23:29.144 Format NVM: Not Supported
00:23:29.144 Firmware Activate/Download: Not Supported
00:23:29.144 Namespace Management: Not Supported
00:23:29.144 Device Self-Test: Not Supported
00:23:29.144 Directives: Not Supported
00:23:29.144 NVMe-MI: Not Supported
00:23:29.144 Virtualization Management: Not Supported
00:23:29.144 Doorbell Buffer Config: Not Supported
00:23:29.144 Get LBA Status Capability: Not Supported
00:23:29.144 Command & Feature Lockdown Capability: Not Supported
00:23:29.144 Abort Command Limit: 4
00:23:29.144 Async Event Request Limit: 4
00:23:29.144 Number of Firmware Slots: N/A
00:23:29.144 Firmware Slot 1 Read-Only: N/A
00:23:29.144 Firmware Activation Without Reset: N/A
00:23:29.144 Multiple Update Detection Support: N/A
00:23:29.144 Firmware Update Granularity: No Information Provided
00:23:29.144 Per-Namespace SMART Log: No
00:23:29.144 Asymmetric Namespace Access Log Page: Not Supported
00:23:29.145 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:29.145 Command Effects Log Page: Supported
00:23:29.145 Get Log Page Extended Data: Supported
00:23:29.145 Telemetry Log Pages: Not Supported
00:23:29.145 Persistent Event Log Pages: Not Supported
00:23:29.145 Supported Log Pages Log Page: May Support
00:23:29.145 Commands Supported & Effects Log Page: Not Supported
00:23:29.145 Feature Identifiers & Effects Log Page: May Support
00:23:29.145 NVMe-MI Commands & Effects Log Page: May Support
00:23:29.145 Data Area 4 for Telemetry Log: Not Supported
00:23:29.145 Error Log Page Entries Supported: 128
00:23:29.145 Keep Alive: Supported
00:23:29.145 Keep Alive Granularity: 10000 ms
00:23:29.145
00:23:29.145 NVM Command Set Attributes
00:23:29.145 ==========================
00:23:29.145 Submission Queue Entry Size
00:23:29.145 Max: 64
00:23:29.145 Min: 64
00:23:29.145 Completion Queue Entry Size
00:23:29.145 Max: 16
00:23:29.145 Min: 16
00:23:29.145 Number of Namespaces: 32
00:23:29.145 Compare Command: Supported
00:23:29.145 Write Uncorrectable Command: Not Supported
00:23:29.145 Dataset Management Command: Supported
00:23:29.145 Write Zeroes Command: Supported
00:23:29.145 Set Features Save Field: Not Supported
00:23:29.145 Reservations: Supported
00:23:29.145 Timestamp: Not Supported
00:23:29.145 Copy: Supported
00:23:29.145 Volatile Write Cache: Present
00:23:29.145 Atomic Write Unit (Normal): 1
00:23:29.145 Atomic Write Unit (PFail): 1
00:23:29.145 Atomic Compare & Write Unit: 1
00:23:29.145 Fused Compare & Write: Supported
00:23:29.145 Scatter-Gather List
00:23:29.145 SGL Command Set: Supported
00:23:29.145 SGL Keyed: Supported
00:23:29.145 SGL Bit Bucket Descriptor: Not Supported
00:23:29.145 SGL Metadata Pointer: Not Supported
00:23:29.145 Oversized SGL: Not Supported
00:23:29.145 SGL Metadata Address: Not Supported
00:23:29.145 SGL Offset: Supported
00:23:29.145 Transport SGL Data Block: Not Supported
00:23:29.145 Replay Protected Memory Block: Not Supported
00:23:29.145
00:23:29.145 Firmware Slot Information
00:23:29.145 =========================
00:23:29.145 Active slot: 1
00:23:29.145 Slot 1 Firmware Revision: 25.01
00:23:29.145
00:23:29.145
00:23:29.145 Commands Supported and Effects
00:23:29.145 ==============================
00:23:29.145 Admin Commands
00:23:29.145 --------------
00:23:29.145 Get Log Page (02h): Supported
00:23:29.145 Identify (06h): Supported
00:23:29.145 Abort (08h): Supported
00:23:29.145 Set Features (09h): Supported
00:23:29.145 Get Features (0Ah): Supported
00:23:29.145 Asynchronous Event Request (0Ch): Supported
00:23:29.145 Keep Alive (18h): Supported
00:23:29.145 I/O Commands
00:23:29.145 ------------
00:23:29.145 Flush (00h): Supported LBA-Change
00:23:29.145 Write (01h): Supported LBA-Change
00:23:29.145 Read (02h): Supported
00:23:29.145 Compare (05h): Supported
00:23:29.145 Write Zeroes (08h): Supported LBA-Change
00:23:29.145 Dataset Management (09h): Supported LBA-Change
00:23:29.145 Copy (19h): Supported LBA-Change
00:23:29.145
00:23:29.145 Error Log
00:23:29.145 =========
00:23:29.145
00:23:29.145 Arbitration
00:23:29.145 ===========
00:23:29.145 Arbitration Burst: 1
00:23:29.145
00:23:29.145 Power Management
00:23:29.145 ================
00:23:29.145 Number of Power States: 1
00:23:29.145 Current Power State: Power State #0
00:23:29.145 Power State #0:
00:23:29.145 Max Power: 0.00 W
00:23:29.145 Non-Operational State: Operational
00:23:29.145 Entry Latency: Not Reported
00:23:29.145 Exit Latency: Not Reported
00:23:29.145 Relative Read Throughput: 0
00:23:29.145 Relative Read Latency: 0
00:23:29.145 Relative Write Throughput: 0
00:23:29.145 Relative Write Latency: 0
00:23:29.145 Idle Power: Not Reported
00:23:29.145 Active Power: Not Reported
00:23:29.145 Non-Operational Permissive Mode: Not Supported
00:23:29.145
00:23:29.145 Health Information
00:23:29.145 ==================
00:23:29.145 Critical Warnings:
00:23:29.145 Available Spare Space: OK
00:23:29.145 Temperature: OK
00:23:29.145 Device Reliability: OK
00:23:29.145 Read Only: No
00:23:29.145 Volatile Memory Backup: OK
00:23:29.145 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:29.145 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:29.145 Available Spare: 0%
00:23:29.145 Available Spare Threshold: 0%
00:23:29.145 Life Percentage Used:[2024-10-08 18:39:23.003108] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003113] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2379620)
00:23:29.145 [2024-10-08 18:39:23.003120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.145 [2024-10-08 18:39:23.003132] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9f00, cid 7, qid 0
00:23:29.145 [2024-10-08 18:39:23.003324] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.145 [2024-10-08 18:39:23.003331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.145 [2024-10-08 18:39:23.003334] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003338] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9f00) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003373] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:23:29.145 [2024-10-08 18:39:23.003383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9480) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.145 [2024-10-08 18:39:23.003395] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9600) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.145 [2024-10-08 18:39:23.003405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9780) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.145 [2024-10-08 18:39:23.003414] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.145 [2024-10-08 18:39:23.003427] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003431] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003435] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.145 [2024-10-08 18:39:23.003442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.145 [2024-10-08 18:39:23.003453] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.145 [2024-10-08 18:39:23.003676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.145 [2024-10-08 18:39:23.003683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.145 [2024-10-08 18:39:23.003686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003698] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.145 [2024-10-08 18:39:23.003712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.145 [2024-10-08 18:39:23.003725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.145 [2024-10-08 18:39:23.003933] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.145 [2024-10-08 18:39:23.003939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.145 [2024-10-08 18:39:23.003943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.003951] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:23:29.145 [2024-10-08 18:39:23.003956] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:23:29.145 [2024-10-08 18:39:23.003966] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003969] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.003983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.145 [2024-10-08 18:39:23.003990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.145 [2024-10-08 18:39:23.004002] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.145 [2024-10-08 18:39:23.004180] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.145 [2024-10-08 18:39:23.004187] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.145 [2024-10-08 18:39:23.004191] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.004195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.145 [2024-10-08 18:39:23.004205] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.145 [2024-10-08 18:39:23.004209] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004213] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.004220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.004230] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.004464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.004471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.004474] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.004489] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004493] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.004503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.004516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.004715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.004721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.004725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.004739] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004746] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.004753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.004763] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.004965] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.004971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.004982] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.004996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.004999] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.005010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.005020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.005260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.005266] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.005269] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.005283] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005287] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.005298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.005308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.005503] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.005510] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.005513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.005527] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005531] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005534] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.005542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.005553] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.005734] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.005740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.005744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.005757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.005765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.005772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.005782] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.009990] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.009999] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.010003] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.010007] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.010017] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.010021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.010025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379620)
00:23:29.146 [2024-10-08 18:39:23.010032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.146 [2024-10-08 18:39:23.010043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23d9900, cid 3, qid 0
00:23:29.146 [2024-10-08 18:39:23.010222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.146 [2024-10-08 18:39:23.010229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.146 [2024-10-08 18:39:23.010232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.146 [2024-10-08 18:39:23.010236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23d9900) on tqpair=0x2379620
00:23:29.146 [2024-10-08 18:39:23.010244] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:23:29.146 0%
00:23:29.146 Data Units Read: 0
00:23:29.146 Data Units Written: 0
00:23:29.146 Host Read Commands: 0
00:23:29.146 Host Write Commands: 0
00:23:29.146 Controller Busy Time: 0 minutes
00:23:29.146 Power Cycles: 0
00:23:29.146 Power On Hours: 0 hours
00:23:29.146 Unsafe Shutdowns: 0
00:23:29.146 Unrecoverable Media Errors: 0
00:23:29.146 Lifetime Error Log Entries: 0
00:23:29.146 Warning Temperature Time: 0 minutes
00:23:29.146 Critical Temperature Time: 0 minutes
00:23:29.146
00:23:29.146 Number of Queues
00:23:29.146 ================
00:23:29.146 Number of I/O Submission Queues: 127
00:23:29.146 Number of I/O Completion Queues: 127
00:23:29.146
00:23:29.146 Active Namespaces
00:23:29.146 =================
00:23:29.146 Namespace ID:1
00:23:29.146 Error Recovery Timeout: Unlimited
00:23:29.146 Command Set Identifier: NVM (00h)
00:23:29.146 Deallocate: Supported
00:23:29.146 Deallocated/Unwritten Error: Not Supported
00:23:29.146 Deallocated Read Value: Unknown
00:23:29.146 Deallocate in Write Zeroes: Not Supported
00:23:29.146 Deallocated Guard Field: 0xFFFF
00:23:29.146 Flush: Supported
00:23:29.146 Reservation: Supported
00:23:29.146 Namespace Sharing Capabilities: Multiple Controllers
00:23:29.146 Size (in LBAs): 131072 (0GiB)
00:23:29.146 Capacity (in LBAs): 131072 (0GiB)
00:23:29.146 Utilization (in LBAs): 131072 (0GiB)
00:23:29.146 NGUID: ABCDEF0123456789ABCDEF0123456789
00:23:29.146 EUI64: ABCDEF0123456789
00:23:29.146 UUID: 6a9cf2da-fb82-449a-a736-c835284020ff
00:23:29.146 Thin Provisioning: Not Supported
00:23:29.146 Per-NS Atomic Units: Yes
00:23:29.146 Atomic Boundary Size (Normal): 0
00:23:29.146 Atomic Boundary Size (PFail): 0
00:23:29.146 Atomic Boundary Offset: 0
00:23:29.146 Maximum Single Source Range Length: 65535
00:23:29.146 Maximum Copy Length: 65535
00:23:29.146 Maximum Source Range Count: 1
00:23:29.146 NGUID/EUI64 Never Reused: No
00:23:29.146 Namespace Write Protected: No
00:23:29.146 Number of LBA Formats: 1
00:23:29.146 Current LBA Format: LBA Format #00
00:23:29.146 LBA Format #00: Data Size: 512 Metadata Size: 0
00:23:29.146
00:23:29.146 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1316426 ']'
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1316426
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1316426 ']'
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1316426
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1316426
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1316426'
killing process with pid 1316426
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1316426
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1316426
00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']'
18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp ==
\t\c\p ]] 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.408 18:39:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.954 00:23:31.954 real 0m11.958s 00:23:31.954 user 0m9.006s 00:23:31.954 sys 0m6.263s 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.954 ************************************ 00:23:31.954 END TEST nvmf_identify 00:23:31.954 ************************************ 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.954 ************************************ 00:23:31.954 START TEST nvmf_perf 00:23:31.954 ************************************ 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:31.954 * Looking for test storage... 
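Before the nvmf_perf output continues below, a note on the teardown that just completed for nvmf_identify: the nvmftestfini sequence traced above reduces to roughly the shell below. This is a sketch for readers, not the harness code itself; the NQN, pid, and cvl_0_* names are this run's values, and the `ip netns del` line is an assumed equivalent of the harness's _remove_spdk_ns helper.

#!/usr/bin/env bash
# Sketch of the nvmf_identify teardown traced above (values from this run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
sync
modprobe -v -r nvme-tcp      # also removes nvme_fabrics and nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # stop nvmf_tgt (pid 1316426 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip the SPDK-tagged ACCEPT rules
ip netns del cvl_0_0_ns_spdk                            # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                # clear the initiator-side interface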
00:23:31.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.954 --rc genhtml_branch_coverage=1 00:23:31.954 --rc genhtml_function_coverage=1 00:23:31.954 --rc genhtml_legend=1 00:23:31.954 --rc geninfo_all_blocks=1 00:23:31.954 --rc geninfo_unexecuted_blocks=1 00:23:31.954 00:23:31.954 ' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.954 --rc genhtml_branch_coverage=1 00:23:31.954 --rc genhtml_function_coverage=1 00:23:31.954 --rc genhtml_legend=1 00:23:31.954 --rc geninfo_all_blocks=1 00:23:31.954 --rc geninfo_unexecuted_blocks=1 00:23:31.954 00:23:31.954 ' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.954 --rc genhtml_branch_coverage=1 00:23:31.954 --rc genhtml_function_coverage=1 00:23:31.954 --rc genhtml_legend=1 00:23:31.954 --rc geninfo_all_blocks=1 00:23:31.954 --rc geninfo_unexecuted_blocks=1 00:23:31.954 00:23:31.954 ' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.954 --rc genhtml_branch_coverage=1 00:23:31.954 --rc genhtml_function_coverage=1 00:23:31.954 --rc genhtml_legend=1 00:23:31.954 --rc geninfo_all_blocks=1 00:23:31.954 --rc geninfo_unexecuted_blocks=1 00:23:31.954 00:23:31.954 ' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.954 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.955 18:39:25 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.955 18:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:40.096 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.096 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:40.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:40.097 Found net devices under 0000:31:00.0: cvl_0_0 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:40.097 18:39:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:40.097 Found net devices under 0000:31:00.1: cvl_0_1 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.097 18:39:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:23:40.097 00:23:40.097 --- 10.0.0.2 ping statistics --- 00:23:40.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.097 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:40.097 00:23:40.097 --- 10.0.0.1 ping statistics --- 00:23:40.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.097 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1320999 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1320999 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1320999 ']' 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:40.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.097 18:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.097 [2024-10-08 18:39:33.582197] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:23:40.097 [2024-10-08 18:39:33.582266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.097 [2024-10-08 18:39:33.670316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.097 [2024-10-08 18:39:33.766861] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.097 [2024-10-08 18:39:33.766922] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.097 [2024-10-08 18:39:33.766930] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.097 [2024-10-08 18:39:33.766938] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.097 [2024-10-08 18:39:33.766944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.097 [2024-10-08 18:39:33.769023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.097 [2024-10-08 18:39:33.769110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.097 [2024-10-08 18:39:33.769264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.097 [2024-10-08 18:39:33.769263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.358 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.358 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:40.358 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:40.358 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.358 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.619 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.619 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:40.619 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:41.191 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:41.191 18:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:41.191 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:41.191 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:41.452 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
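The RPC calls traced over the next stretch of output assemble the perf target. Condensed into plain shell, the bring-up is approximately the sequence below; each command appears verbatim in the trace, and 0000:65:00.0 is the local NVMe drive discovered just above.

#!/usr/bin/env bash
# Sketch of the perf-target bring-up traced below (values from this run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512 B blocks -> Malloc0
$RPC nvmf_create_transport -t tcp -o              # as traced (NVMF_TRANSPORT_OPTS='-t tcp -o')
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1 in the perf output
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2, backed by 0000:65:00.0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listener up, each spdk_nvme_perf run that follows points -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' at it, varying queue depth (-q), IO size (-o), and runtime (-t) per case.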
00:23:41.452 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:41.452 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:41.452 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:41.452 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.713 [2024-10-08 18:39:35.543547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.713 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.974 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:41.974 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.974 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:41.974 18:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:42.235 18:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.496 [2024-10-08 18:39:36.347265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.496 18:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:42.757 18:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:42.757 18:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:42.757 18:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:42.757 18:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:44.140 Initializing NVMe Controllers 00:23:44.140 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:44.140 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:44.140 Initialization complete. Launching workers. 
00:23:44.140 ======================================================== 00:23:44.140 Latency(us) 00:23:44.140 Device Information : IOPS MiB/s Average min max 00:23:44.140 PCIE (0000:65:00.0) NSID 1 from core 0: 77107.85 301.20 414.30 14.07 5079.11 00:23:44.140 ======================================================== 00:23:44.140 Total : 77107.85 301.20 414.30 14.07 5079.11 00:23:44.140 00:23:44.140 18:39:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.523 Initializing NVMe Controllers 00:23:45.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:45.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:45.523 Initialization complete. Launching workers. 00:23:45.523 ======================================================== 00:23:45.523 Latency(us) 00:23:45.523 Device Information : IOPS MiB/s Average min max 00:23:45.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.00 0.42 9267.87 219.14 46174.53 00:23:45.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23056.04 7959.33 47895.28 00:23:45.523 ======================================================== 00:23:45.523 Total : 153.00 0.60 13323.21 219.14 47895.28 00:23:45.523 00:23:45.523 18:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.906 Initializing NVMe Controllers 00:23:46.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:46.906 Initialization complete. Launching workers. 00:23:46.906 ======================================================== 00:23:46.907 Latency(us) 00:23:46.907 Device Information : IOPS MiB/s Average min max 00:23:46.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11916.00 46.55 2687.51 466.07 8579.92 00:23:46.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3740.00 14.61 8602.49 6571.56 16162.33 00:23:46.907 ======================================================== 00:23:46.907 Total : 15656.00 61.16 4100.52 466.07 16162.33 00:23:46.907 00:23:46.907 18:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:46.907 18:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:46.907 18:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:49.450 Initializing NVMe Controllers 00:23:49.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.450 Controller IO queue size 128, less than required. 00:23:49.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:49.450 Controller IO queue size 128, less than required. 00:23:49.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.450 Initialization complete. Launching workers. 00:23:49.450 ======================================================== 00:23:49.450 Latency(us) 00:23:49.450 Device Information : IOPS MiB/s Average min max 00:23:49.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2369.46 592.37 55012.19 35302.83 98194.64 00:23:49.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 613.99 153.50 215049.52 64499.31 309566.66 00:23:49.450 ======================================================== 00:23:49.450 Total : 2983.46 745.86 87947.65 35302.83 309566.66 00:23:49.450 00:23:49.450 18:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:49.450 No valid NVMe controllers or AIO or URING devices found 00:23:49.450 Initializing NVMe Controllers 00:23:49.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.450 Controller IO queue size 128, less than required. 00:23:49.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.450 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:49.450 Controller IO queue size 128, less than required. 00:23:49.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.450 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:49.450 WARNING: Some requested NVMe devices were skipped 00:23:49.450 18:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:51.993 Initializing NVMe Controllers 00:23:51.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.993 Controller IO queue size 128, less than required. 00:23:51.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.993 Controller IO queue size 128, less than required. 00:23:51.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:51.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:51.993 Initialization complete. Launching workers. 
00:23:51.993 00:23:51.993 ==================== 00:23:51.993 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:51.993 TCP transport: 00:23:51.993 polls: 57118 00:23:51.993 idle_polls: 42708 00:23:51.993 sock_completions: 14410 00:23:51.993 nvme_completions: 6805 00:23:51.993 submitted_requests: 10140 00:23:51.993 queued_requests: 1 00:23:51.993 00:23:51.993 ==================== 00:23:51.993 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:51.993 TCP transport: 00:23:51.993 polls: 32043 00:23:51.993 idle_polls: 18678 00:23:51.993 sock_completions: 13365 00:23:51.993 nvme_completions: 7917 00:23:51.993 submitted_requests: 11960 00:23:51.993 queued_requests: 1 00:23:51.993 ======================================================== 00:23:51.993 Latency(us) 00:23:51.993 Device Information : IOPS MiB/s Average min max 00:23:51.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1700.97 425.24 76928.02 40205.82 140909.84 00:23:51.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1978.97 494.74 64993.16 30659.60 118413.00 00:23:51.993 ======================================================== 00:23:51.993 Total : 3679.94 919.99 70509.79 30659.60 140909.84 00:23:51.993 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.994 18:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.994 rmmod nvme_tcp 00:23:51.994 rmmod nvme_fabrics 00:23:51.994 rmmod nvme_keyring 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1320999 ']' 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1320999 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1320999 ']' 00:23:51.994 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1320999 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1320999 00:23:52.254 18:39:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1320999' 00:23:52.254 killing process with pid 1320999 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1320999 00:23:52.254 18:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1320999 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.166 18:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.707 00:23:56.707 real 0m24.607s 00:23:56.707 user 0m58.722s 00:23:56.707 sys 0m8.912s 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.707 ************************************ 00:23:56.707 END TEST nvmf_perf 00:23:56.707 ************************************ 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.707 ************************************ 00:23:56.707 START TEST nvmf_fio_host 00:23:56.707 ************************************ 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:56.707 * Looking for test storage... 
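As nvmf_fio_host starts below, one piece of context: common.sh earlier defined NVME_CONNECT='nvme connect' along with the generated hostnqn/hostid for host tests that drive the kernel initiator. Done by hand against a target like the one above, that connect looks roughly like the sketch below (standard nvme-cli flags; the NQN, address, and host identity are this run's values and would differ elsewhere).

# Attach the Linux kernel initiator to the SPDK target over TCP, then detach.
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
  --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
nvme list                                        # namespaces show up as /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach when finished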
00:23:56.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:56.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.707 --rc genhtml_branch_coverage=1 00:23:56.707 --rc genhtml_function_coverage=1 00:23:56.707 --rc genhtml_legend=1 00:23:56.707 --rc geninfo_all_blocks=1 00:23:56.707 --rc geninfo_unexecuted_blocks=1 00:23:56.707 00:23:56.707 ' 00:23:56.707 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:56.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.707 --rc genhtml_branch_coverage=1 00:23:56.707 --rc genhtml_function_coverage=1 00:23:56.708 --rc genhtml_legend=1 00:23:56.708 --rc geninfo_all_blocks=1 00:23:56.708 --rc geninfo_unexecuted_blocks=1 00:23:56.708 00:23:56.708 ' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:56.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.708 --rc genhtml_branch_coverage=1 00:23:56.708 --rc genhtml_function_coverage=1 00:23:56.708 --rc genhtml_legend=1 00:23:56.708 --rc geninfo_all_blocks=1 00:23:56.708 --rc geninfo_unexecuted_blocks=1 00:23:56.708 00:23:56.708 ' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:56.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.708 --rc genhtml_branch_coverage=1 00:23:56.708 --rc genhtml_function_coverage=1 00:23:56.708 --rc genhtml_legend=1 00:23:56.708 --rc geninfo_all_blocks=1 00:23:56.708 --rc geninfo_unexecuted_blocks=1 00:23:56.708 00:23:56.708 ' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.708 18:39:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.708 
18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.708 18:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:04.841 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:04.841 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:04.841 Found net devices under 0000:31:00.0: cvl_0_0 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:04.841 Found net devices under 0000:31:00.1: cvl_0_1 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.841 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.842 18:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:24:04.842 00:24:04.842 --- 10.0.0.2 ping statistics --- 00:24:04.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.842 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:24:04.842 00:24:04.842 --- 10.0.0.1 ping statistics --- 00:24:04.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.842 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1328075 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1328075 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1328075 ']' 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.842 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.842 [2024-10-08 18:39:58.163152] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
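The nvmftestinit sequence above builds a loopback-free TCP test rig out of one dual-port E810 NIC: the target port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) so target and initiator can share the host without the kernel short-circuiting their traffic, each side gets an address on 10.0.0.0/24, and both directions are ping-verified before the target starts. A condensed sketch of the same steps, assuming the two net devices already exist and are otherwise unconfigured:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Every later target-side command in the log, nvmf_tgt itself included, is then wrapped in 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD.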
00:24:04.842 [2024-10-08 18:39:58.163221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.842 [2024-10-08 18:39:58.254097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.842 [2024-10-08 18:39:58.350059] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.842 [2024-10-08 18:39:58.350119] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.842 [2024-10-08 18:39:58.350128] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.842 [2024-10-08 18:39:58.350135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.842 [2024-10-08 18:39:58.350141] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.842 [2024-10-08 18:39:58.352583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.842 [2024-10-08 18:39:58.352745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.842 [2024-10-08 18:39:58.352905] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.842 [2024-10-08 18:39:58.352906] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.102 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.102 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:05.102 18:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:05.362 [2024-10-08 18:39:59.233883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.362 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:05.362 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.362 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.362 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:05.622 Malloc1 00:24:05.622 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.883 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:05.883 18:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.144 [2024-10-08 18:40:00.099249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.144 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:06.406 18:40:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.667 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:06.667 fio-3.35 00:24:06.667 Starting 1 thread 00:24:09.207 00:24:09.207 test: (groupid=0, jobs=1): 
err= 0: pid=1328832: Tue Oct 8 18:40:03 2024 00:24:09.207 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec) 00:24:09.207 slat (usec): min=2, max=283, avg= 2.15, stdev= 2.43 00:24:09.207 clat (usec): min=3598, max=8949, avg=5140.85, stdev=529.39 00:24:09.207 lat (usec): min=3600, max=8956, avg=5143.00, stdev=529.58 00:24:09.207 clat percentiles (usec): 00:24:09.207 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:09.207 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:09.207 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5735], 00:24:09.207 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[ 8717], 99.95th=[ 8717], 00:24:09.207 | 99.99th=[ 8848] 00:24:09.207 bw ( KiB/s): min=50072, max=56344, per=99.93%, avg=54692.00, stdev=3081.08, samples=4 00:24:09.207 iops : min=12518, max=14086, avg=13673.00, stdev=770.27, samples=4 00:24:09.207 write: IOPS=13.7k, BW=53.4MiB/s (55.9MB/s)(107MiB/2004msec); 0 zone resets 00:24:09.207 slat (usec): min=2, max=275, avg= 2.22, stdev= 1.83 00:24:09.207 clat (usec): min=2928, max=7912, avg=4162.18, stdev=447.46 00:24:09.207 lat (usec): min=2946, max=7914, avg=4164.40, stdev=447.72 00:24:09.207 clat percentiles (usec): 00:24:09.207 | 1.00th=[ 3458], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:24:09.207 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:09.207 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686], 00:24:09.207 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 7308], 00:24:09.207 | 99.99th=[ 7767] 00:24:09.207 bw ( KiB/s): min=50568, max=56256, per=100.00%, avg=54642.00, stdev=2722.53, samples=4 00:24:09.207 iops : min=12642, max=14064, avg=13660.50, stdev=680.63, samples=4 00:24:09.208 lat (msec) : 4=17.25%, 10=82.75% 00:24:09.208 cpu : usr=75.99%, sys=22.57%, ctx=60, majf=0, minf=17 00:24:09.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:09.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:09.208 issued rwts: total=27421,27373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.208 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:09.208 00:24:09.208 Run status group 0 (all jobs): 00:24:09.208 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:09.208 WRITE: bw=53.4MiB/s (55.9MB/s), 53.4MiB/s-53.4MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:09.208 
18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:09.208 18:40:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.468 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:09.468 fio-3.35 00:24:09.468 Starting 1 thread 00:24:12.007 00:24:12.007 test: (groupid=0, jobs=1): err= 0: pid=1329493: Tue Oct 8 18:40:05 2024 00:24:12.007 read: IOPS=9254, BW=145MiB/s (152MB/s)(290MiB/2006msec) 00:24:12.007 slat (usec): min=3, max=144, avg= 3.63, stdev= 1.85 00:24:12.007 clat (usec): min=1616, max=49821, avg=8478.84, stdev=3233.67 00:24:12.007 lat (usec): min=1620, max=49825, avg=8482.48, stdev=3233.76 00:24:12.007 clat percentiles (usec): 00:24:12.007 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6456], 00:24:12.007 | 30.00th=[ 7046], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:24:12.007 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[10814], 95.00th=[11731], 00:24:12.007 | 99.00th=[13566], 99.50th=[14746], 99.90th=[45876], 99.95th=[47973], 00:24:12.007 | 99.99th=[49546] 00:24:12.007 bw ( KiB/s): min=69696, max=77504, per=49.31%, avg=73016.00, stdev=3442.18, samples=4 00:24:12.007 iops : min= 4356, max= 4844, avg=4563.50, stdev=215.14, samples=4 00:24:12.007 write: IOPS=5282, BW=82.5MiB/s (86.6MB/s)(148MiB/1793msec); 0 zone resets 00:24:12.007 slat (usec): min=39, max=341, 
avg=40.97, stdev= 7.66 00:24:12.007 clat (usec): min=2016, max=51485, avg=9274.84, stdev=3045.03 00:24:12.007 lat (usec): min=2062, max=51524, avg=9315.81, stdev=3045.81 00:24:12.007 clat percentiles (usec): 00:24:12.007 | 1.00th=[ 6194], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7898], 00:24:12.007 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:12.007 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:24:12.007 | 99.00th=[13304], 99.50th=[14746], 99.90th=[50594], 99.95th=[51119], 00:24:12.007 | 99.99th=[51643] 00:24:12.007 bw ( KiB/s): min=73088, max=80384, per=89.65%, avg=75776.00, stdev=3388.68, samples=4 00:24:12.007 iops : min= 4568, max= 5024, avg=4736.00, stdev=211.79, samples=4 00:24:12.007 lat (msec) : 2=0.04%, 4=0.57%, 10=75.33%, 20=23.61%, 50=0.39% 00:24:12.007 lat (msec) : 100=0.07% 00:24:12.007 cpu : usr=84.19%, sys=14.31%, ctx=12, majf=0, minf=29 00:24:12.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:12.007 issued rwts: total=18565,9472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:12.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:12.007 00:24:12.007 Run status group 0 (all jobs): 00:24:12.007 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=290MiB (304MB), run=2006-2006msec 00:24:12.007 WRITE: bw=82.5MiB/s (86.6MB/s), 82.5MiB/s-82.5MiB/s (86.6MB/s-86.6MB/s), io=148MiB (155MB), run=1793-1793msec 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:12.007 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:12.007 rmmod nvme_tcp 00:24:12.007 rmmod nvme_fabrics 00:24:12.008 rmmod nvme_keyring 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1328075 ']' 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1328075 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1328075 ']' 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 1328075 00:24:12.008 18:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:12.008 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.008 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1328075 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1328075' 00:24:12.268 killing process with pid 1328075 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1328075 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1328075 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.268 18:40:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:14.811 00:24:14.811 real 0m18.033s 00:24:14.811 user 1m0.328s 00:24:14.811 sys 0m7.801s 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.811 ************************************ 00:24:14.811 END TEST nvmf_fio_host 00:24:14.811 ************************************ 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.811 ************************************ 00:24:14.811 START TEST nvmf_failover 00:24:14.811 ************************************ 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.811 * Looking for test storage... 00:24:14.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:14.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.811 --rc genhtml_branch_coverage=1 00:24:14.811 --rc genhtml_function_coverage=1 00:24:14.811 --rc genhtml_legend=1 00:24:14.811 --rc geninfo_all_blocks=1 00:24:14.811 --rc geninfo_unexecuted_blocks=1 00:24:14.811 00:24:14.811 ' 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:14.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.811 --rc genhtml_branch_coverage=1 00:24:14.811 --rc genhtml_function_coverage=1 00:24:14.811 --rc genhtml_legend=1 00:24:14.811 --rc geninfo_all_blocks=1 00:24:14.811 --rc geninfo_unexecuted_blocks=1 00:24:14.811 00:24:14.811 ' 00:24:14.811 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:14.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.811 --rc genhtml_branch_coverage=1 00:24:14.811 --rc genhtml_function_coverage=1 00:24:14.811 --rc genhtml_legend=1 00:24:14.812 --rc geninfo_all_blocks=1 00:24:14.812 --rc geninfo_unexecuted_blocks=1 00:24:14.812 00:24:14.812 ' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.812 --rc genhtml_branch_coverage=1 00:24:14.812 --rc genhtml_function_coverage=1 00:24:14.812 --rc genhtml_legend=1 00:24:14.812 --rc geninfo_all_blocks=1 00:24:14.812 --rc geninfo_unexecuted_blocks=1 00:24:14.812 00:24:14.812 ' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
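The trace above pins rpc_py to SPDK's scripts/rpc.py and sizes the Malloc backing bdev (64 MiB, 512-byte blocks) that the failover target exports. Condensed from the RPC calls that appear later in this log (host/failover.sh@22 through @28) — a sketch only; the workspace path and the 10.0.0.2 listener address are specific to this run — the target-side setup amounts to:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # create the TCP transport, a 64 MiB RAM-backed bdev, and a subsystem around it
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on the same IP give the initiator ports to fail over between
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422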
00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.812 18:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:22.955 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:22.955 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:22.955 Found net devices under 0000:31:00.0: cvl_0_0 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:22.955 Found net devices under 0000:31:00.1: cvl_0_1 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.955 18:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.955 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:24:22.956 00:24:22.956 --- 10.0.0.2 ping statistics --- 00:24:22.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.956 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:24:22.956 00:24:22.956 --- 10.0.0.1 ping statistics --- 00:24:22.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.956 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1334342 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1334342 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1334342 ']' 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.956 18:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.956 [2024-10-08 18:40:16.393338] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:24:22.956 [2024-10-08 18:40:16.393400] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.956 [2024-10-08 18:40:16.485393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:22.956 [2024-10-08 18:40:16.579705] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
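Before starting nvmf_tgt, the trace above split target and initiator across a network namespace and verified reachability in both directions. Collected from the ip/iptables/ping commands in this log — interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this host — the plumbing reduces to:

    # target-side port moves into its own namespace; initiator stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface, then smoke-test
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1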
00:24:22.956 [2024-10-08 18:40:16.579767] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.956 [2024-10-08 18:40:16.579776] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.956 [2024-10-08 18:40:16.579783] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.956 [2024-10-08 18:40:16.579789] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.956 [2024-10-08 18:40:16.581126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.956 [2024-10-08 18:40:16.581444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.956 [2024-10-08 18:40:16.581444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.218 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.478 [2024-10-08 18:40:17.431941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.478 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:23.739 Malloc0 00:24:23.739 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.000 18:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.261 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.261 [2024-10-08 18:40:18.267561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.261 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.522 [2024-10-08 18:40:18.452118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.522 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:24.782 [2024-10-08 18:40:18.652770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1334749 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1334749 /var/tmp/bdevperf.sock 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1334749 ']' 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.782 18:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:25.752 18:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.752 18:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:25.752 18:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:25.752 NVMe0n1 00:24:25.752 18:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:26.061 00:24:26.061 18:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1335087 00:24:26.061 18:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:26.061 18:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.018 18:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.280 [2024-10-08 18:40:21.230218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6f410 is same with the state(6) to be set 00:24:27.280 [2024-10-08 18:40:21.230277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6f410 is same with the state(6) to be set 00:24:27.280 [2024-10-08 18:40:21.230283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6f410 is same with the state(6) to be set 00:24:27.280 [2024-10-08 
18:40:21.230288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6f410 is same with the state(6) to be set 00:24:27.280
[... identical 'recv state of tqpair=0xb6f410 is same with the state(6) to be set' messages repeated through 18:40:21.230614; duplicates elided ...]
00:24:27.280 18:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:30.575 18:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:30.575
00:24:30.575 18:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:30.835 [2024-10-08 18:40:24.683643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb701c0 is same with the state(6) to be set
[... identical 'recv state of tqpair=0xb701c0 is same with the state(6) to be set' messages repeated through 18:40:24.683920; duplicates elided ...]
00:24:30.835 18:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:34.128 18:40:27
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.128 [2024-10-08 18:40:27.859388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.128 18:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:35.076 18:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:35.076 [2024-10-08 18:40:29.054035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076 [2024-10-08 18:40:29.054146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb71130 is same with the state(6) to be set 00:24:35.076
[... identical 'recv state of tqpair=0xb71130 is same with the state(6) to be set' messages repeated through 18:40:29.054545; duplicates elided ...]
00:24:35.077 [2024-10-08 18:40:29.054550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 [2024-10-08 18:40:29.054646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71130 is same with the state(6) to be set 00:24:35.077 18:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
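The burst above is a single SPDK error site firing in a tight loop: nvmf_tcp_qpair_set_recv_state in tcp.c refuses to "change" a qpair's receive state to the state it already holds, logs, and returns, so a caller that keeps retrying while the qpair is wedged emits one line per attempt. A minimal sketch of such a guard, under assumed names and enum values ("state(6)" is simply the integer value of the target enum nvme_tcp_pdu_recv_state member; see lib/nvmf/tcp.c in the SPDK tree for the real definition):

    #include <stdio.h>

    /* Hypothetical stand-in for SPDK's enum nvme_tcp_pdu_recv_state;
     * the value 6 here is illustrative, not taken from the source. */
    enum pdu_recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* No-op transition: log and bail. Each retry while the qpair
             * is stuck in this state adds one more line to the log. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR); /* reproduces the message once */
        return 0;
    }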
00:24:35.077 18:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1335087
00:24:41.660 {
00:24:41.660   "results": [
00:24:41.660     {
00:24:41.660       "job": "NVMe0n1",
00:24:41.660       "core_mask": "0x1",
00:24:41.660       "workload": "verify",
00:24:41.660       "status": "finished",
00:24:41.660       "verify_range": {
00:24:41.660         "start": 0,
00:24:41.660         "length": 16384
00:24:41.660       },
00:24:41.660       "queue_depth": 128,
00:24:41.660       "io_size": 4096,
00:24:41.660       "runtime": 15.01072,
00:24:41.660       "iops": 12570.149866228936,
00:24:41.660       "mibps": 49.10214791495678,
00:24:41.660       "io_failed": 4437,
00:24:41.660       "io_timeout": 0,
00:24:41.660       "avg_latency_us": 9928.406696906306,
00:24:41.660       "min_latency_us": 549.5466666666666,
00:24:41.660       "max_latency_us": 20753.066666666666
00:24:41.660     }
00:24:41.660   ],
00:24:41.660   "core_count": 1
00:24:41.660 }
00:24:41.660 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1334749
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1334749 ']'
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1334749
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334749
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334749'
killing process with pid 1334749
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1334749
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1334749
18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-08 18:40:18.739875] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
[2024-10-08 18:40:18.739952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334749 ]
[2024-10-08 18:40:18.822952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-08 18:40:18.902164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
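The bdevperf summary above is internally consistent, which is worth checking when a run also reports io_failed: the throughput figure in MiB/s is just iops × io_size / 2^20. A quick check using the values copied from the results JSON (plain arithmetic, nothing taken from the SPDK tree):

    #include <stdio.h>

    int main(void)
    {
        double iops    = 12570.149866228936; /* "iops" from the results block */
        double io_size = 4096.0;             /* "io_size" in bytes */
        double runtime = 15.01072;           /* "runtime" in seconds */

        /* 12570.15 * 4096 / 2^20 = 49.102..., matching "mibps" above. */
        printf("mibps = %.11f\n", iops * io_size / (1024.0 * 1024.0));

        /* iops * runtime roughly recovers the I/O count behind the rate,
         * which sits alongside the 4437 reported in "io_failed". */
        printf("I/Os behind the rate ~= %.0f\n", iops * runtime);
        return 0;
    }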
00:24:41.661 11074.00 IOPS, 43.26 MiB/s [2024-10-08T16:40:35.718Z]
00:24:41.661 [2024-10-08 18:40:21.231518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.661 [2024-10-08 18:40:21.231552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for READ commands lba:95256 through lba:95856, every one ABORTED - SQ DELETION ...]
[2024-10-08 18:40:21.232864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-08 18:40:21.232872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
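Every completion in this stretch carries the same status, "ABORTED - SQ DELETION (00/08)". The pair in parentheses is NVMe's status-code-type / status-code: SCT 0x0 is the generic command status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion", i.e. the submission queue went away underneath the in-flight commands. A tiny decoder for that printing convention (the struct here is a simplified stand-in, not SPDK's spdk_nvme_status):

    #include <stdio.h>

    /* Simplified view of the NVMe completion status field; the log's
     * "(00/08)" prints status-code-type / status-code. */
    struct nvme_status {
        unsigned sct; /* status code type: 0x0 = generic command status */
        unsigned sc;  /* status code: 0x08 = aborted due to SQ deletion */
    };

    int main(void)
    {
        struct nvme_status st = { 0x0, 0x08 };
        /* Reproduces the suffix seen after every ABORTED completion above. */
        printf("ABORTED - SQ DELETION (%02x/%02x)\n", st.sct, st.sc);
        return 0;
    }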
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.662 [2024-10-08 18:40:21.232931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.232939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.232949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.232956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.232965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.232972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.232986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.232994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.663 [2024-10-08 18:40:21.233010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 18:40:21.233417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.663 [2024-10-08 18:40:21.233425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.663 [2024-10-08 
18:40:21.233434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repetitive per-command records collapsed: in-flight WRITE commands lba 96136-96184 (len:8) each completed ABORTED - SQ DELETION (00/08); queued WRITE i/o lba 96192-96264 aborted (nvme_qpair_abort_queued_reqs) and completed manually with the same status ...]
00:24:41.664 [2024-10-08 18:40:21.233863] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fd85a0 was disconnected and freed. reset controller.
00:24:41.664 [2024-10-08 18:40:21.233872] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four outstanding ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) completed ABORTED - SQ DELETION (00/08) ...]
00:24:41.664 [2024-10-08 18:40:21.244945] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.664 [2024-10-08 18:40:21.245005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb7e40 (9): Bad file descriptor
00:24:41.664 [2024-10-08 18:40:21.248548] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.664 [2024-10-08 18:40:21.283473] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
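(Editor's note: the disconnect/failover/reset sequence above is bdev_nvme walking the controller's registered alternate trids. A minimal sketch of the initiator-side RPC calls that register such a multi-path controller, assuming SPDK's standard scripts/rpc.py client; the 10.0.0.2:4420-4422 paths mirror the trids in this log, everything else is illustrative and not taken from this job:)
# Sketch: attach one controller, then register two alternate TCP paths
# under the same bdev name so bdev_nvme can fail over between them.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover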
00:24:41.664 11004.50 IOPS, 42.99 MiB/s [2024-10-08T16:40:35.721Z] 11085.67 IOPS, 43.30 MiB/s [2024-10-08T16:40:35.721Z] 11554.75 IOPS, 45.14 MiB/s [2024-10-08T16:40:35.721Z]
[... repetitive per-command records collapsed: in-flight READ commands lba 37456-37928 (len:8, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) ...]
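(Editor's note: the per-interval marks above pair IOPS with MiB/s. The WRITE SGL entries in this log show len:0x1000, i.e. 4 KiB per command (len:8 blocks of 512 B), so the two columns are related by MiB/s = IOPS * 4096 / 2^20. A quick check of the first mark:)
awk 'BEGIN { printf "%.2f MiB/s\n", 11004.50 * 4096 / 1048576 }'
# prints 42.99 MiB/s, matching the 11004.50 IOPS mark logged above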
[... repetitive per-command records collapsed: in-flight WRITE commands lba 37936-38312 (len:8) each completed ABORTED - SQ DELETION (00/08); queued WRITE i/o lba 38320-38472 aborted (nvme_qpair_abort_queued_reqs) and completed manually with the same status ...]
00:24:41.667 [2024-10-08 18:40:24.697600] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fda570 was disconnected and freed. reset controller.
00:24:41.667 [2024-10-08 18:40:24.697609] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four outstanding ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) completed ABORTED - SQ DELETION (00/08) ...]
00:24:41.668 [2024-10-08 18:40:24.697685] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.668 [2024-10-08 18:40:24.697720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb7e40 (9): Bad file descriptor
00:24:41.668 [2024-10-08 18:40:24.700577] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.668 [2024-10-08 18:40:24.729435] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
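(Editor's note: "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x00 / status code 0x08, Command Aborted due to SQ Deletion; it is expected when the active path's queue pair is torn down mid-I/O, and bdev_nvme requeues those I/Os on the path it fails over to. A hypothetical target-side trigger for one such cycle, assuming the rpc.py client and the subsystem/listener names used in the sketch above:)
# Sketch: drop the listener the host is currently connected to; under I/O
# load this produces exactly the abort/failover/reset pattern logged here.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421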
00:24:41.668 11733.80 IOPS, 45.84 MiB/s [2024-10-08T16:40:35.725Z] 11940.83 IOPS, 46.64 MiB/s [2024-10-08T16:40:35.725Z] 12088.86 IOPS, 47.22 MiB/s [2024-10-08T16:40:35.725Z] 12204.00 IOPS, 47.67 MiB/s [2024-10-08T16:40:35.725Z]
[... repetitive per-command records collapsed: in-flight READ commands lba 103984-104208 (len:8, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) ...]
00:24:41.668 [2024-10-08 18:40:29.056294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056299]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.668 [2024-10-08 18:40:29.056376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.668 [2024-10-08 18:40:29.056382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056533] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.669 [2024-10-08 18:40:29.056663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:41.669 [2024-10-08 18:40:29.056774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.669 [2024-10-08 18:40:29.056820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.669 [2024-10-08 18:40:29.056824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 
18:40:29.056889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.056994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.056999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104912 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.670 [2024-10-08 18:40:29.057314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.670 [2024-10-08 18:40:29.057320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.671 [2024-10-08 18:40:29.057325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.671 [2024-10-08 18:40:29.057336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.671 [2024-10-08 18:40:29.057347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.671 [2024-10-08 18:40:29.057359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.671 [2024-10-08 18:40:29.057370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.671 [2024-10-08 18:40:29.057381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.671 [2024-10-08 18:40:29.057392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.671 [2024-10-08 18:40:29.057403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.671 [2024-10-08 18:40:29.057415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.671 [2024-10-08 18:40:29.057427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fda750 is same with the state(6) to be set 00:24:41.671 [2024-10-08 18:40:29.057440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.671 [2024-10-08 18:40:29.057444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.671 [2024-10-08 18:40:29.057449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104512 len:8 PRP1 0x0 PRP2 0x0 00:24:41.671 [2024-10-08 18:40:29.057454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.671 [2024-10-08 18:40:29.057486] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fda750 was disconnected and freed. reset controller. 
00:24:41.671 [2024-10-08 18:40:29.057492] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:41.671 [2024-10-08 18:40:29.057509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.671 [2024-10-08 18:40:29.057514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.671 [2024-10-08 18:40:29.057520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.671 [2024-10-08 18:40:29.057525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.671 [2024-10-08 18:40:29.057531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.671 [2024-10-08 18:40:29.057536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.671 [2024-10-08 18:40:29.057542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.671 [2024-10-08 18:40:29.057548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.671 [2024-10-08 18:40:29.057553] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.671 [2024-10-08 18:40:29.057571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb7e40 (9): Bad file descriptor
00:24:41.671 [2024-10-08 18:40:29.059997] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.671 [2024-10-08 18:40:29.099503] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
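The "Resetting controller successful." notice above is the marker the harness counts, one per forced failover. The @65-@67 check traced just below reduces to roughly this sketch (only the grep and the comparison appear in the trace; the exit-on-mismatch framing is an assumption):

    # Each successful failover logs one 'Resetting controller successful.';
    # this test phase drives three failovers across the 4420/4421/4422
    # portals, so exactly three resets are expected in the bdevperf log.
    try_txt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$try_txt")
    if (( count != 3 )); then
        echo "expected 3 controller resets, got $count" >&2
        exit 1
    fi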
00:24:41.671 12225.89 IOPS, 47.76 MiB/s [2024-10-08T16:40:35.728Z] 12316.80 IOPS, 48.11 MiB/s [2024-10-08T16:40:35.728Z] 12377.55 IOPS, 48.35 MiB/s [2024-10-08T16:40:35.728Z] 12434.83 IOPS, 48.57 MiB/s [2024-10-08T16:40:35.728Z] 12482.69 IOPS, 48.76 MiB/s [2024-10-08T16:40:35.728Z] 12521.43 IOPS, 48.91 MiB/s [2024-10-08T16:40:35.728Z] 12570.73 IOPS, 49.10 MiB/s
00:24:41.671 Latency(us)
00:24:41.671 [2024-10-08T16:40:35.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.671 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:41.671 Verification LBA range: start 0x0 length 0x4000
00:24:41.671 NVMe0n1 : 15.01 12570.15 49.10 295.59 0.00 9928.41 549.55 20753.07
00:24:41.671 [2024-10-08T16:40:35.728Z] ===================================================================================================================
00:24:41.671 [2024-10-08T16:40:35.728Z] Total : 12570.15 49.10 295.59 0.00 9928.41 549.55 20753.07
00:24:41.671 Received shutdown signal, test time was about 15.000000 seconds
00:24:41.671
00:24:41.671 Latency(us)
00:24:41.671 [2024-10-08T16:40:35.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.671 [2024-10-08T16:40:35.728Z] ===================================================================================================================
00:24:41.671 [2024-10-08T16:40:35.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1338004
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1338004 /var/tmp/bdevperf.sock
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1338004 ']'
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
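The @72-@75 trace above starts bdevperf suspended against a private RPC socket, then blocks until that socket answers. A minimal sketch of the same sequence (the polling loop is a stand-in for autotest_common.sh's waitforlisten helper, the redirect into try.txt is inferred from the later cat of that file, and rpc_get_methods is just a cheap RPC to probe with):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # -z keeps bdevperf idle until a perform_tests RPC arrives on $SOCK.
    "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f \
        &> "$SPDK/test/nvmf/host/try.txt" &
    bdevperf_pid=$!

    # Stand-in for waitforlisten: poll until the UNIX socket accepts RPCs.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done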
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:41.671 18:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:42.240 18:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:42.240 18:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:42.240 18:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:42.499 [2024-10-08 18:40:36.424444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:42.499 18:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:42.759 [2024-10-08 18:40:36.608889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:42.759 18:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:43.020 NVMe0n1
00:24:43.020 18:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:43.280
00:24:43.280 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:43.540
00:24:43.540 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:43.540 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:43.799 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:44.059 18:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:47.365 18:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:47.365 18:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:47.366 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:47.366 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1339125
00:24:47.366 18:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1339125
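Stitched together, the @76-@92 trace above is a three-path failover setup: publish two extra portals on the target, register all three as failover paths under one controller name, drop the active path, then fire the queued bdevperf run whose JSON result follows below. A condensed sketch using the same RPCs as the trace (the loop form is a compression of the three @78-@80 attach calls):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    SOCK=/var/tmp/bdevperf.sock

    # Target side: listen on two additional portals for the same subsystem.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

    # Initiator side: attach all three paths under one bdev_nvme controller;
    # with -x failover the extra trids act as standby paths, not active multipath.
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n $NQN -x failover
    done

    # Remove the active path; bdev_nvme should fail over to the next trid.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n $NQN
    sleep 3

    # Kick off the verify workload queued in the suspended bdevperf.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $SOCK perform_tests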
00:24:48.304 "workload": "verify", 00:24:48.304 "status": "finished", 00:24:48.304 "verify_range": { 00:24:48.304 "start": 0, 00:24:48.304 "length": 16384 00:24:48.304 }, 00:24:48.304 "queue_depth": 128, 00:24:48.304 "io_size": 4096, 00:24:48.304 "runtime": 1.003951, 00:24:48.304 "iops": 13103.229141661297, 00:24:48.304 "mibps": 51.18448883461444, 00:24:48.304 "io_failed": 0, 00:24:48.304 "io_timeout": 0, 00:24:48.304 "avg_latency_us": 9727.303062460409, 00:24:48.304 "min_latency_us": 907.9466666666667, 00:24:48.304 "max_latency_us": 12397.226666666667 00:24:48.304 } 00:24:48.304 ], 00:24:48.304 "core_count": 1 00:24:48.304 } 00:24:48.304 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:48.304 [2024-10-08 18:40:35.476885] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:24:48.304 [2024-10-08 18:40:35.476942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338004 ] 00:24:48.304 [2024-10-08 18:40:35.554923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.304 [2024-10-08 18:40:35.608258] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.304 [2024-10-08 18:40:37.915305] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:48.304 [2024-10-08 18:40:37.915342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.304 [2024-10-08 18:40:37.915350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.304 [2024-10-08 18:40:37.915357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.304 [2024-10-08 18:40:37.915362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.304 [2024-10-08 18:40:37.915368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.304 [2024-10-08 18:40:37.915373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.304 [2024-10-08 18:40:37.915379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.304 [2024-10-08 18:40:37.915384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.304 [2024-10-08 18:40:37.915389] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:48.304 [2024-10-08 18:40:37.915410] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.304 [2024-10-08 18:40:37.915421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1737e40 (9): Bad file descriptor 00:24:48.304 [2024-10-08 18:40:37.961090] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:48.304 Running I/O for 1 seconds...
00:24:48.304 13012.00 IOPS, 50.83 MiB/s
00:24:48.304 Latency(us)
00:24:48.304 [2024-10-08T16:40:42.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:48.304 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:48.304 Verification LBA range: start 0x0 length 0x4000
00:24:48.304 NVMe0n1 : 1.00 13103.23 51.18 0.00 0.00 9727.30 907.95 12397.23
00:24:48.304 [2024-10-08T16:40:42.361Z] ===================================================================================================================
00:24:48.304 [2024-10-08T16:40:42.361Z] Total : 13103.23 51.18 0.00 0.00 9727.30 907.95 12397.23
00:24:48.304 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:48.304 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:48.564 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:48.824 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:48.824 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:48.825 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:49.084 18:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:52.378 18:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:52.378 18:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1338004
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1338004 ']'
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1338004
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1338004
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1338004'
killing process with pid 1338004
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1338004
00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1338004
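The teardown traced above walks the standby paths back out one detach at a time, confirming after each step that the NVMe0 controller object survives, then reaps bdevperf. Roughly, reusing the RPC/SOCK/NQN/bdevperf_pid variables from the sketches above (killprocess is autotest_common.sh's helper, approximated here with a plain kill/wait):

    # Detach the remaining paths; the controller must stay registered.
    for port in 4422 4421; do
        $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
        $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    sleep 3
    $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0 || exit 1

    # Approximation of killprocess: confirm the pid is alive, then reap it.
    kill -0 $bdevperf_pid && kill $bdevperf_pid
    wait $bdevperf_pid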
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:52.378 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.639 rmmod nvme_tcp 00:24:52.639 rmmod nvme_fabrics 00:24:52.639 rmmod nvme_keyring 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1334342 ']' 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1334342 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1334342 ']' 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1334342 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334342 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334342' 00:24:52.639 killing process with pid 1334342 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1334342 00:24:52.639 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1334342 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.899 18:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.440 00:24:55.440 real 0m40.539s 00:24:55.440 user 2m3.687s 00:24:55.440 sys 0m8.935s 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:55.440 ************************************ 00:24:55.440 END TEST nvmf_failover 00:24:55.440 ************************************ 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.440 ************************************ 00:24:55.440 START TEST nvmf_host_discovery 00:24:55.440 ************************************ 00:24:55.440 18:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:55.440 * Looking for test storage... 
00:24:55.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:55.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.440 --rc genhtml_branch_coverage=1 00:24:55.440 --rc genhtml_function_coverage=1 00:24:55.440 --rc genhtml_legend=1 00:24:55.440 --rc geninfo_all_blocks=1 00:24:55.440 --rc geninfo_unexecuted_blocks=1 00:24:55.440 00:24:55.440 ' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:55.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.440 --rc genhtml_branch_coverage=1 00:24:55.440 --rc genhtml_function_coverage=1 00:24:55.440 --rc genhtml_legend=1 00:24:55.440 --rc geninfo_all_blocks=1 00:24:55.440 --rc geninfo_unexecuted_blocks=1 00:24:55.440 00:24:55.440 ' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:55.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.440 --rc genhtml_branch_coverage=1 00:24:55.440 --rc genhtml_function_coverage=1 00:24:55.440 --rc genhtml_legend=1 00:24:55.440 --rc geninfo_all_blocks=1 00:24:55.440 --rc geninfo_unexecuted_blocks=1 00:24:55.440 00:24:55.440 ' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:55.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.440 --rc genhtml_branch_coverage=1 00:24:55.440 --rc genhtml_function_coverage=1 00:24:55.440 --rc genhtml_legend=1 00:24:55.440 --rc geninfo_all_blocks=1 00:24:55.440 --rc geninfo_unexecuted_blocks=1 00:24:55.440 00:24:55.440 ' 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:55.440 18:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.440 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:55.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:55.441 18:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:03.572 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:03.573 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.573 18:40:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:03.573 Found net devices under 0000:31:00.0: cvl_0_0 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:03.573 Found net devices under 0000:31:00.1: cvl_0_1 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.573 
18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:25:03.573 00:25:03.573 --- 10.0.0.2 ping statistics --- 00:25:03.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.573 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:03.573 00:25:03.573 --- 10.0.0.1 ping statistics --- 00:25:03.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.573 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1344519 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1344519 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1344519 ']' 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:03.573 18:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.573 [2024-10-08 18:40:56.790441] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
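The ping exchange above succeeds because of the namespace split the harness performed earlier in this trace: cvl_0_0 (10.0.0.2, the target port) was moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, the initiator port) stays in the root namespace, which is also why the nvmf_tgt launched above is wrapped in ip netns exec. Condensed from the commands visible in the trace (a sketch of the setup, not the harness itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment tag is what lets the teardown
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore, seen earlier) strip it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator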
00:25:03.573 [2024-10-08 18:40:56.790503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.573 [2024-10-08 18:40:56.880509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.573 [2024-10-08 18:40:56.971716] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.573 [2024-10-08 18:40:56.971776] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.573 [2024-10-08 18:40:56.971786] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.573 [2024-10-08 18:40:56.971793] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.573 [2024-10-08 18:40:56.971799] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.573 [2024-10-08 18:40:56.972602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.573 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.573 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:03.573 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:03.573 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.573 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 [2024-10-08 18:40:57.652758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 [2024-10-08 18:40:57.664939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 null0 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 null1 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1344559 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1344559 /tmp/host.sock 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1344559 ']' 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:03.833 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:03.833 18:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.833 [2024-10-08 18:40:57.752809] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:25:03.833 [2024-10-08 18:40:57.752858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344559 ] 00:25:03.833 [2024-10-08 18:40:57.813256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.833 [2024-10-08 18:40:57.877736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.772 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:04.773 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 [2024-10-08 18:40:58.880207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:05.032 18:40:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.032 18:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.032 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.033 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.292 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:05.292 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:05.552 [2024-10-08 18:40:59.603188] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:05.552 [2024-10-08 18:40:59.603223] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:05.552 [2024-10-08 18:40:59.603239] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.812 
[2024-10-08 18:40:59.691495] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:05.812 [2024-10-08 18:40:59.753677] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:05.812 [2024-10-08 18:40:59.753714] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:06.072 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.072 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:06.072 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.073 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
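[editor's aside] The checks above and below are all driven by the harness's waitforcondition helper, whose shape is visible in the autotest_common.sh xtrace markers (@914-@920): it stores the condition string, bounds retries at max=10, evals the condition each pass, and sleeps one second between attempts. A minimal sketch reconstructed from those markers follows; the return-on-timeout fallback is an assumption, since the exhaustion path never fires in this log.

    # Sketch of the polling helper, reconstructed from the @914-@920 xtrace markers.
    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'  (@914)
        local max=10    # bounded number of attempts                        (@915)
        while (( max-- )); do              # @916
            eval "$cond" && return 0       # @917-@918: condition met, stop polling
            sleep 1                        # @920: back off before the next attempt
        done
        return 1  # assumed timeout behavior; not observed in this log
    }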
00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.333 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.593 18:41:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.593 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.594 [2024-10-08 18:41:00.585193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:06.594 [2024-10-08 18:41:00.585772] bdev_nvme.c:7238:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:06.594 [2024-10-08 18:41:00.585814] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:06.594 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:06.854 18:41:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.854 [2024-10-08 18:41:00.714502] bdev_nvme.c:7180:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:06.854 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:07.115 [2024-10-08 18:41:01.024322] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.115 [2024-10-08 18:41:01.024352] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:07.115 [2024-10-08 18:41:01.024359] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:08.056 18:41:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:08.056 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.057 [2024-10-08 18:41:01.857259] bdev_nvme.c:7238:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:08.057 [2024-10-08 18:41:01.857276] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:08.057 [2024-10-08 18:41:01.865778] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-10-08 18:41:01.865797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-10-08 18:41:01.865804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-10-08 18:41:01.865809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-10-08 18:41:01.865815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-10-08 18:41:01.865820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-10-08 18:41:01.865826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-10-08 18:41:01.865831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-10-08 18:41:01.865837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.057 [2024-10-08 18:41:01.875794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.057 [2024-10-08 18:41:01.885827] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.057 [2024-10-08 18:41:01.886247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.057 [2024-10-08 18:41:01.886277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211e50 with addr=10.0.0.2, port=4420 00:25:08.057 [2024-10-08 18:41:01.886286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 [2024-10-08 18:41:01.886300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 [2024-10-08 18:41:01.886309] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.057 [2024-10-08 18:41:01.886314] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.057 [2024-10-08 18:41:01.886321] 
nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.057 [2024-10-08 18:41:01.886333] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.057 [2024-10-08 18:41:01.895875] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.057 [2024-10-08 18:41:01.896260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.057 [2024-10-08 18:41:01.896290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211e50 with addr=10.0.0.2, port=4420 00:25:08.057 [2024-10-08 18:41:01.896299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 [2024-10-08 18:41:01.896314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 [2024-10-08 18:41:01.896323] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.057 [2024-10-08 18:41:01.896332] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.057 [2024-10-08 18:41:01.896338] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.057 [2024-10-08 18:41:01.896349] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.057 [2024-10-08 18:41:01.905922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.057 [2024-10-08 18:41:01.906231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.057 [2024-10-08 18:41:01.906242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211e50 with addr=10.0.0.2, port=4420 00:25:08.057 [2024-10-08 18:41:01.906248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 [2024-10-08 18:41:01.906256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 [2024-10-08 18:41:01.906264] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.057 [2024-10-08 18:41:01.906268] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.057 [2024-10-08 18:41:01.906274] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.057 [2024-10-08 18:41:01.906282] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
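[editor's aside] The burst of connect() errno 111 (ECONNREFUSED) entries here is expected at this point in the test: the host/discovery.sh@127 step removed the 4420 listener while the initiator still held a path there, so each bdev_nvme reset attempt against 10.0.0.2:4420 is refused until the discovery poller drops that path in favor of 4421. A sketch of the step, using only commands visible in this log and assuming the same rpc_cmd wrapper the harness uses as its front end for scripts/rpc.py:

    # Target side: drop the 4420 listener, as in host/discovery.sh@127 above.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side: poll the controller's remaining trsvcids until only 4421 is reported.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'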
00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:08.057 [2024-10-08 18:41:01.915972] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:08.057 [2024-10-08 18:41:01.916176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.057 [2024-10-08 18:41:01.916186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211e50 with addr=10.0.0.2, port=4420 00:25:08.057 [2024-10-08 18:41:01.916192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 [2024-10-08 18:41:01.916200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 [2024-10-08 18:41:01.916207] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.057 [2024-10-08 18:41:01.916212] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.057 [2024-10-08 18:41:01.916217] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.057 [2024-10-08 18:41:01.916224] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.057 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.057 [2024-10-08 18:41:01.926021] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.057 [2024-10-08 18:41:01.926355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.057 [2024-10-08 18:41:01.926365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211e50 with addr=10.0.0.2, port=4420 00:25:08.057 [2024-10-08 18:41:01.926370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 [2024-10-08 18:41:01.926378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 [2024-10-08 18:41:01.926386] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.057 [2024-10-08 18:41:01.926391] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.057 [2024-10-08 18:41:01.926396] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.057 [2024-10-08 18:41:01.926403] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.057 [2024-10-08 18:41:01.936068] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.057 [2024-10-08 18:41:01.936391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.057 [2024-10-08 18:41:01.936400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211e50 with addr=10.0.0.2, port=4420 00:25:08.057 [2024-10-08 18:41:01.936405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211e50 is same with the state(6) to be set 00:25:08.057 [2024-10-08 18:41:01.936412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211e50 (9): Bad file descriptor 00:25:08.057 [2024-10-08 18:41:01.936420] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.057 [2024-10-08 18:41:01.936425] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.057 [2024-10-08 18:41:01.936430] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.057 [2024-10-08 18:41:01.936437] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
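[editor's aside] Before the next round of count checks, a note on the notification bookkeeping: host/discovery.sh@74 counts new events by asking notify_get_notifications for everything past the last consumed id and piping the result through jq's length, and @75 advances notify_id by that count, which matches the 0 -> 1 -> 2 -> 2 -> 4 progression of notify_id in this log. A minimal sketch reconstructed from those two markers; the real script may instead derive notify_id from the ids in the returned events, and its variable scoping may differ.

    # Reconstructed from host/discovery.sh@74-@75 as seen in the xtrace.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))  # assumed accumulation rule, consistent with this log
    }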
00:25:08.057 [2024-10-08 18:41:01.944524] bdev_nvme.c:7043:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:08.058 [2024-10-08 18:41:01.944537] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.058 18:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.058 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.318 18:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.257 [2024-10-08 18:41:03.300938] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.257 [2024-10-08 18:41:03.300952] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.257 [2024-10-08 18:41:03.300960] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.517 [2024-10-08 18:41:03.390228] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:09.517 [2024-10-08 18:41:03.454877] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.517 [2024-10-08 18:41:03.454900] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.517 request: 00:25:09.517 { 00:25:09.517 "name": "nvme", 00:25:09.517 "trtype": "tcp", 00:25:09.517 "traddr": "10.0.0.2", 00:25:09.517 "adrfam": "ipv4", 00:25:09.517 "trsvcid": "8009", 00:25:09.517 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.517 "wait_for_attach": true, 00:25:09.517 "method": "bdev_nvme_start_discovery", 00:25:09.517 "req_id": 1 00:25:09.517 } 00:25:09.517 Got JSON-RPC error response 00:25:09.517 response: 00:25:09.517 { 00:25:09.517 "code": -17, 00:25:09.517 "message": "File exists" 00:25:09.517 } 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.517 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.778 request: 00:25:09.778 { 00:25:09.778 "name": "nvme_second", 00:25:09.778 "trtype": "tcp", 00:25:09.778 "traddr": "10.0.0.2", 00:25:09.778 "adrfam": "ipv4", 00:25:09.778 "trsvcid": "8009", 00:25:09.778 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.778 "wait_for_attach": true, 00:25:09.778 "method": "bdev_nvme_start_discovery", 00:25:09.778 "req_id": 1 00:25:09.778 } 00:25:09.778 Got JSON-RPC error response 00:25:09.778 response: 00:25:09.778 { 00:25:09.778 "code": -17, 00:25:09.778 "message": "File exists" 00:25:09.778 } 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:09.778 18:41:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.778 18:41:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.716 [2024-10-08 18:41:04.715739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.716 [2024-10-08 18:41:04.715762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211b60 with addr=10.0.0.2, port=8010 00:25:10.716 [2024-10-08 18:41:04.715772] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:10.716 [2024-10-08 18:41:04.715777] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:10.716 [2024-10-08 18:41:04.715783] bdev_nvme.c:7324:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:12.096 [2024-10-08 18:41:05.718077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.096 [2024-10-08 18:41:05.718095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211b60 with addr=10.0.0.2, port=8010 00:25:12.096 [2024-10-08 18:41:05.718103] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:12.096 [2024-10-08 18:41:05.718108] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:12.096 [2024-10-08 18:41:05.718117] bdev_nvme.c:7324:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:12.665 [2024-10-08 18:41:06.720080] bdev_nvme.c:7299:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:12.665 request: 00:25:12.665 { 00:25:12.665 "name": "nvme_second", 00:25:12.665 "trtype": "tcp", 00:25:12.665 "traddr": "10.0.0.2", 00:25:12.925 "adrfam": "ipv4", 00:25:12.925 "trsvcid": "8010", 00:25:12.925 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:12.925 "wait_for_attach": false, 00:25:12.925 "attach_timeout_ms": 3000, 00:25:12.925 "method": "bdev_nvme_start_discovery", 00:25:12.925 "req_id": 1 00:25:12.925 } 00:25:12.925 Got JSON-RPC error response 00:25:12.925 response: 00:25:12.925 { 00:25:12.925 "code": -110, 00:25:12.925 "message": "Connection timed out" 00:25:12.925 } 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1344559 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.925 rmmod nvme_tcp 00:25:12.925 rmmod nvme_fabrics 00:25:12.925 rmmod nvme_keyring 00:25:12.925 18:41:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1344519 ']' 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1344519 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1344519 ']' 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1344519 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1344519 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1344519' 00:25:12.925 killing process with pid 1344519 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1344519 00:25:12.925 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1344519 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.185 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.095 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.095 00:25:15.095 real 0m20.138s 00:25:15.095 user 0m23.252s 00:25:15.095 sys 0m7.142s 00:25:15.095 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:15.095 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.095 
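The two negative cases exercised above reduce to a pair of bdev_nvme_start_discovery RPCs against the host application. A minimal sketch, assuming a host app serving JSON-RPC on /tmp/host.sock and a discovery service listening on 10.0.0.2:8009 (rpc.py abbreviates the full scripts/rpc.py path used throughout this run):

# Re-registering discovery on a port that already has a discovery service
# attached fails with -17 "File exists" (wait_for_attach=true via -w):
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# Discovery toward the non-listening port 8010 with a 3000 ms attach timeout
# fails with -110 "Connection timed out" after the connect() errno-111
# retries seen in the posix_sock_create errors above:
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000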
************************************ 00:25:15.095 END TEST nvmf_host_discovery 00:25:15.095 ************************************ 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.356 ************************************ 00:25:15.356 START TEST nvmf_host_multipath_status 00:25:15.356 ************************************ 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:15.356 * Looking for test storage... 00:25:15.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:15.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.356 --rc genhtml_branch_coverage=1 00:25:15.356 --rc genhtml_function_coverage=1 00:25:15.356 --rc genhtml_legend=1 00:25:15.356 --rc geninfo_all_blocks=1 00:25:15.356 --rc geninfo_unexecuted_blocks=1 00:25:15.356 00:25:15.356 ' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:15.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.356 --rc genhtml_branch_coverage=1 00:25:15.356 --rc genhtml_function_coverage=1 00:25:15.356 --rc genhtml_legend=1 00:25:15.356 --rc geninfo_all_blocks=1 00:25:15.356 --rc geninfo_unexecuted_blocks=1 00:25:15.356 00:25:15.356 ' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:15.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.356 --rc genhtml_branch_coverage=1 00:25:15.356 --rc genhtml_function_coverage=1 00:25:15.356 --rc genhtml_legend=1 00:25:15.356 --rc geninfo_all_blocks=1 00:25:15.356 --rc geninfo_unexecuted_blocks=1 00:25:15.356 00:25:15.356 ' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:15.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.356 --rc genhtml_branch_coverage=1 00:25:15.356 --rc genhtml_function_coverage=1 00:25:15.356 --rc genhtml_legend=1 00:25:15.356 --rc geninfo_all_blocks=1 00:25:15.356 --rc geninfo_unexecuted_blocks=1 00:25:15.356 00:25:15.356 ' 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
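The lt/cmp_versions walk traced above (scripts/common.sh) is a plain field-wise numeric comparison of dot/dash-separated version strings, used here to decide which lcov flags the run keeps. A self-contained sketch of the same idea; ver_lt is a hypothetical condensation, not the helper's real name, and it assumes purely numeric components:

# Compare two versions component by component; missing fields count as 0.
ver_lt() {
    local -a a b
    local i
    IFS='.-' read -ra a <<< "$1"
    IFS='.-' read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1 # equal is not less-than
}
# Mirrors the trace: 1.15 < 2, so the pre-2.x branch/function coverage
# options are exported into LCOV_OPTS.
ver_lt 1.15 2 && echo "lcov predates 2.x"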
00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.356 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.617 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.618 18:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.758 18:41:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:23.758 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:23.758 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.758 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:23.759 Found net devices under 0000:31:00.0: cvl_0_0 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:25:23.759 Found net devices under 0000:31:00.1: cvl_0_1 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.759 18:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.759 18:41:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:25:23.759 00:25:23.759 --- 10.0.0.2 ping statistics --- 00:25:23.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.759 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:25:23.759 00:25:23.759 --- 10.0.0.1 ping statistics --- 00:25:23.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.759 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1350797 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1350797 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1350797 ']' 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.759 18:41:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.759 18:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:23.759 [2024-10-08 18:41:17.219509] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:25:23.759 [2024-10-08 18:41:17.219577] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.759 [2024-10-08 18:41:17.309851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:23.759 [2024-10-08 18:41:17.405408] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.759 [2024-10-08 18:41:17.405470] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.759 [2024-10-08 18:41:17.405479] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.759 [2024-10-08 18:41:17.405486] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.759 [2024-10-08 18:41:17.405492] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.759 [2024-10-08 18:41:17.406628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.759 [2024-10-08 18:41:17.406631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.020 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.020 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:24.020 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:24.020 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.020 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:24.281 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.281 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1350797 00:25:24.281 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:24.281 [2024-10-08 18:41:18.257624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.281 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:24.542 Malloc0 00:25:24.542 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:24.802 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:24.802 18:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.063 [2024-10-08 18:41:18.999781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.063 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:25.325 [2024-10-08 18:41:19.168191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1351163 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1351163 /var/tmp/bdevperf.sock 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1351163 ']' 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
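Everything multipath_status.sh has configured to this point condenses to the RPCs below. A sketch: rpc.py stands for the rpc_py path set at multipath_status.sh@15, the nvmf_tgt is assumed to already be running inside the cvl_0_0_ns_spdk namespace, and bdevperf abbreviates the binary launched at @44:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
# -r enables ANA reporting, which the ANA-state checks below depend on
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
    -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# two listeners on one subsystem give the host its two paths
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421
# host side: bdevperf idles (-z) until driven over its own RPC socket
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &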
00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.325 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:25.587 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:25.587 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:25.587 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:25.848 18:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:26.156 Nvme0n1 00:25:26.497 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:26.497 Nvme0n1 00:25:26.497 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:26.497 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:29.046 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:29.046 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:29.046 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:29.046 18:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:29.987 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:29.987 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:29.987 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.987 18:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.248 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.509 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.509 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.509 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.509 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.770 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.030 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.031 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:31.031 18:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:25:31.291 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:31.291 18:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.676 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.937 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.937 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.937 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.937 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
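Each port_status call above is the same two-step probe: dump bdevperf's I/O paths over its RPC socket, then select one boolean for one trsvcid with jq; current, connected and accessible are the three fields the test cycles through. Standalone:

# Expect "true"/"false" back, exactly as the [[ ... ]] comparisons above do.
current=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current')
[[ $current == true ]] && echo "4420 is the active (current) path"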
00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.198 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.458 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.458 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:33.458 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.718 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:33.978 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:34.924 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:34.924 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.924 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.924 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.924 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.924 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:35.185 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.185 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.185 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.185 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.185 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.185 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.445 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.445 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.445 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.445 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.705 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.966 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.966 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:35.966 18:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:36.228 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:36.228 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.610 18:41:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.610 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.870 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.870 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.870 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.870 18:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.130 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.130 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.130 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.130 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.389 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.389 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:38.389 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.389 18:41:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.389 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.389 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:38.389 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:38.649 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:38.909 18:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:39.848 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:39.848 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:39.848 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.848 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.108 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.108 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.108 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.108 18:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.108 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.108 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.108 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.108 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.367 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.368 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.368 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.368 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.628 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.628 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:40.628 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.628 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:40.888 18:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:41.149 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.409 18:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:42.348 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:42.348 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:42.348 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.348 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.608 18:41:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.608 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.868 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.868 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.868 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.868 18:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.129 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.129 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:43.129 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.129 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.389 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.389 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:43.389 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.389 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.389 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.389 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:43.651 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:43.651 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:43.911 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:43.911 18:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:45.299 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:45.299 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:45.299 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.299 18:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.299 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.559 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.559 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.559 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.559 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.819 18:41:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.819 18:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.079 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.079 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:46.079 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.340 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.340 18:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.723 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.983 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.983 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.983 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.983 18:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.243 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.503 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.503 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:48.503 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.762 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:49.021 18:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
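Note: from the @116 bdev_nvme_set_multipath_policy call above, Nvme0n1 runs with "-p active_active", so the check_status expectations change: both listeners may now report current=true at once (the "true true true true true true" rounds), whereas in the earlier rounds at most one path was current at a time. For ad-hoc inspection, the six probes collapse into one query; a usage sketch against the same bdevperf socket:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] |
            "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'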
00:25:49.959 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:49.959 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.959 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.959 18:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.218 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.478 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.478 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.478 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.478 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.738 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.997 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.997 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:50.997 18:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.256 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:51.516 18:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.455 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.715 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.715 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.715 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.715 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.975 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:52.975 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.975 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.975 18:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.235 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1351163 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1351163 ']' 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1351163 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351163 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351163' 00:25:53.496 killing process with pid 1351163 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1351163 00:25:53.496 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1351163 00:25:53.496 { 00:25:53.496 "results": [ 00:25:53.496 { 00:25:53.496 "job": "Nvme0n1", 
00:25:53.496 "core_mask": "0x4", 00:25:53.496 "workload": "verify", 00:25:53.496 "status": "terminated", 00:25:53.496 "verify_range": { 00:25:53.496 "start": 0, 00:25:53.496 "length": 16384 00:25:53.496 }, 00:25:53.496 "queue_depth": 128, 00:25:53.496 "io_size": 4096, 00:25:53.496 "runtime": 26.862479, 00:25:53.496 "iops": 12309.10222396079, 00:25:53.496 "mibps": 48.082430562346836, 00:25:53.496 "io_failed": 0, 00:25:53.496 "io_timeout": 0, 00:25:53.496 "avg_latency_us": 10379.13664609122, 00:25:53.496 "min_latency_us": 269.6533333333333, 00:25:53.496 "max_latency_us": 3019898.88 00:25:53.496 } 00:25:53.496 ], 00:25:53.496 "core_count": 1 00:25:53.496 } 00:25:53.761 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1351163 00:25:53.761 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:53.761 [2024-10-08 18:41:19.222608] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:25:53.761 [2024-10-08 18:41:19.222682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351163 ] 00:25:53.761 [2024-10-08 18:41:19.279769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.761 [2024-10-08 18:41:19.362056] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.761 Running I/O for 90 seconds... 00:25:53.761 10331.00 IOPS, 40.36 MiB/s [2024-10-08T16:41:47.818Z] 11681.00 IOPS, 45.63 MiB/s [2024-10-08T16:41:47.818Z] 12245.00 IOPS, 47.83 MiB/s [2024-10-08T16:41:47.818Z] 12459.75 IOPS, 48.67 MiB/s [2024-10-08T16:41:47.818Z] 12575.20 IOPS, 49.12 MiB/s [2024-10-08T16:41:47.818Z] 12621.83 IOPS, 49.30 MiB/s [2024-10-08T16:41:47.818Z] 12675.86 IOPS, 49.52 MiB/s [2024-10-08T16:41:47.818Z] 12727.25 IOPS, 49.72 MiB/s [2024-10-08T16:41:47.818Z] 12792.33 IOPS, 49.97 MiB/s [2024-10-08T16:41:47.818Z] 12841.90 IOPS, 50.16 MiB/s [2024-10-08T16:41:47.818Z] 12867.00 IOPS, 50.26 MiB/s [2024-10-08T16:41:47.818Z] [2024-10-08 18:41:32.559871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.559906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.559937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.559955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.559961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.559971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.559987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.559998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.560003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.560019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.560035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.761 [2024-10-08 18:41:32.560050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
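Note: in this dump from try.txt, each 243:nvme_io_qpair_print_command NOTICE (the failed I/O: sqid/cid/nsid/lba) pairs with the 474:spdk_nvme_print_completion NOTICE that follows it. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe path-related status (SCT 3h, SC 02h), returned while a listener's ANA state was inaccessible, which is consistent with the 18:41:30-18:41:32 window traced above; dnr:0 (Do Not Retry clear) lets the multipath layer retry the I/O on the surviving path. To tally these in the saved log, for example:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt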
00:25:53.761 [2024-10-08 18:41:32.560148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.761 [2024-10-08 18:41:32.560386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.761 [2024-10-08 18:41:32.560391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.762 [2024-10-08 18:41:32.560402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.762 [2024-10-08 18:41:32.560407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.762 [2024-10-08 18:41:32.560417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.762 [2024-10-08 18:41:32.560423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.762 [2024-10-08 18:41:32.560433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.762 [2024-10-08 18:41:32.560438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.762 [2024-10-08 18:41:32.560448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.762 [2024-10-08 18:41:32.560453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.762 [2024-10-08 18:41:32.560463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:53.762 [2024-10-08 18:41:32.560469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:53.762 [... roughly ninety further command/completion NOTICE pairs omitted: WRITE lba:52960-53360 (SGL DATA BLOCK) and READ lba:52400-52736 (SGL TRANSPORT DATA BLOCK), all on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 002d-000a, p:0 m:0 dnr:0 ...]
12833.25 IOPS, 50.13 MiB/s [2024-10-08T16:41:47.821Z]
11846.08 IOPS, 46.27 MiB/s [2024-10-08T16:41:47.821Z]
10999.93 IOPS, 42.97 MiB/s [2024-10-08T16:41:47.821Z]
10310.73 IOPS, 40.28 MiB/s [2024-10-08T16:41:47.821Z]
10470.38 IOPS, 40.90 MiB/s [2024-10-08T16:41:47.821Z]
10618.59 IOPS, 41.48 MiB/s [2024-10-08T16:41:47.821Z]
10977.39 IOPS, 42.88 MiB/s [2024-10-08T16:41:47.821Z]
11308.05 IOPS, 44.17 MiB/s [2024-10-08T16:41:47.821Z]
11539.80 IOPS, 45.08 MiB/s [2024-10-08T16:41:47.821Z]
11600.38 IOPS, 45.31 MiB/s [2024-10-08T16:41:47.821Z]
11649.95 IOPS, 45.51 MiB/s [2024-10-08T16:41:47.821Z]
11871.61 IOPS, 46.37 MiB/s [2024-10-08T16:41:47.821Z]
12100.00 IOPS, 47.27 MiB/s [2024-10-08T16:41:47.821Z]
00:25:53.764 [2024-10-08 18:41:45.297814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:53.764 [2024-10-08 18:41:45.297849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:53.766 [... roughly sixty further command/completion NOTICE pairs omitted: WRITE lba:54000-54400 and READ lba:53312-53968 on qid:1, again all ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 005d-001b, p:0 m:0 dnr:0 ...]
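A note on the "(03/02)" pair printed in every completion above: it is the NVMe status code type / status code taken from Dword 3 of the completion queue entry, and SCT 0x3 (path-related) with SC 0x02 is "Asymmetric Namespace Access Inaccessible". A minimal standalone C sketch of that decoding, assuming the NVMe base-spec field layout (illustrative only; decode_cqe_status is a hypothetical helper, not SPDK source):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack the status fields of an NVMe completion (CQE Dword 3).
     * Layout per the NVMe base spec: bit 31 DNR, bits 27:25 SCT,
     * bits 24:17 SC. */
    static void decode_cqe_status(uint32_t cdw3)
    {
        uint8_t dnr = (cdw3 >> 31) & 0x1;
        uint8_t sct = (cdw3 >> 25) & 0x7;
        uint8_t sc  = (cdw3 >> 17) & 0xff;
        printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);
    }

    int main(void)
    {
        /* SCT 0x3 = path-related, SC 0x02 = ANA Inaccessible: the 03/02
         * seen above. dnr:0 means the command may be retried, e.g. on
         * another path. */
        decode_cqe_status((0x3u << 25) | (0x02u << 17));
        return 0;
    }

Because dnr is 0, the initiator is free to retry the I/O elsewhere, which is why the IOPS samples above keep making forward progress even while this path reports its namespace inaccessible.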
12252.84 IOPS, 47.86 MiB/s [2024-10-08T16:41:47.823Z]
12289.27 IOPS, 48.00 MiB/s [2024-10-08T16:41:47.823Z]
Received shutdown signal, test time was about 26.863087 seconds
00:25:53.766
00:25:53.766                                                   Latency(us)
00:25:53.766 [2024-10-08T16:41:47.823Z] Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:53.766 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:53.766 Verification LBA range: start 0x0 length 0x4000
00:25:53.766 Nvme0n1                                                                   :      26.86   12309.10      48.08       0.00       0.00   10379.14     269.65 3019898.88
00:25:53.766 [2024-10-08T16:41:47.823Z] ===================================================================================================================
00:25:53.766 [2024-10-08T16:41:47.823Z] Total                                                                     :            12309.10      48.08       0.00       0.00   10379.14     269.65 3019898.88
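The Total row is self-consistent: with 4096-byte I/Os, throughput in MiB/s follows directly from IOPS. A quick standalone cross-check (a sketch, not part of the test harness):

    #include <stdio.h>

    int main(void)
    {
        double iops  = 12309.10;                           /* Total row above */
        double mib_s = iops * 4096.0 / (1024.0 * 1024.0);  /* 4 KiB per I/O */
        printf("%.2f MiB/s\n", mib_s);                     /* prints 48.08 */
        return 0;
    }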
00:25:53.766 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:53.766 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:53.766 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:54.027 rmmod nvme_tcp
00:25:54.027 rmmod nvme_fabrics
00:25:54.027 rmmod nvme_keyring
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1350797 ']'
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1350797
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1350797 ']'
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1350797
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1350797
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1350797'
00:25:54.027 killing process with pid 1350797
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1350797
00:25:54.027 18:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1350797
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:54.287 18:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.195 ************************************ 00:25:56.195 END TEST nvmf_host_multipath_status 00:25:56.195 ************************************ 00:25:56.195 18:41:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:56.195 18:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:56.195 18:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.195 18:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.456 ************************************ 00:25:56.456 START TEST nvmf_discovery_remove_ifc 00:25:56.456 ************************************ 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:56.456 * Looking for test storage... 00:25:56.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.456 --rc genhtml_branch_coverage=1 00:25:56.456 --rc genhtml_function_coverage=1 00:25:56.456 --rc genhtml_legend=1 00:25:56.456 --rc geninfo_all_blocks=1 00:25:56.456 --rc geninfo_unexecuted_blocks=1 00:25:56.456 00:25:56.456 ' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.456 --rc genhtml_branch_coverage=1 00:25:56.456 --rc genhtml_function_coverage=1 00:25:56.456 --rc genhtml_legend=1 00:25:56.456 --rc geninfo_all_blocks=1 00:25:56.456 --rc geninfo_unexecuted_blocks=1 00:25:56.456 00:25:56.456 ' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.456 --rc genhtml_branch_coverage=1 00:25:56.456 --rc genhtml_function_coverage=1 00:25:56.456 --rc genhtml_legend=1 00:25:56.456 --rc geninfo_all_blocks=1 00:25:56.456 --rc geninfo_unexecuted_blocks=1 00:25:56.456 00:25:56.456 ' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:56.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.456 --rc genhtml_branch_coverage=1 00:25:56.456 --rc genhtml_function_coverage=1 00:25:56.456 --rc genhtml_legend=1 00:25:56.456 --rc geninfo_all_blocks=1 00:25:56.456 --rc geninfo_unexecuted_blocks=1 00:25:56.456 00:25:56.456 ' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.456 
18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.456 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.457 18:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:04.591 18:41:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:04.591 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.591 18:41:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:04.591 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:04.591 Found net devices under 0000:31:00.0: cvl_0_0 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:04.591 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:04.592 Found net devices under 0000:31:00.1: cvl_0_1 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.592 18:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:04.592 
18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:04.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:26:04.592 00:26:04.592 --- 10.0.0.2 ping statistics --- 00:26:04.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.592 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:26:04.592 00:26:04.592 --- 10.0.0.1 ping statistics --- 00:26:04.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.592 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1361131 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1361131 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1361131 ']' 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:04.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.592 18:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.592 [2024-10-08 18:41:58.227694] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:26:04.592 [2024-10-08 18:41:58.227763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.592 [2024-10-08 18:41:58.316677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.592 [2024-10-08 18:41:58.409919] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.592 [2024-10-08 18:41:58.409988] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.592 [2024-10-08 18:41:58.409997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.592 [2024-10-08 18:41:58.410005] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.592 [2024-10-08 18:41:58.410010] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.592 [2024-10-08 18:41:58.410838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.163 [2024-10-08 18:41:59.095256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.163 [2024-10-08 18:41:59.103482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:05.163 null0 00:26:05.163 [2024-10-08 18:41:59.135469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1361454 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1361454 /tmp/host.sock 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1361454 ']' 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:05.163 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.163 18:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.163 [2024-10-08 18:41:59.212172] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:26:05.163 [2024-10-08 18:41:59.212233] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1361454 ] 00:26:05.423 [2024-10-08 18:41:59.293213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.423 [2024-10-08 18:41:59.388732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.993 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.253 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.253 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:06.253 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.253 18:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.195 [2024-10-08 18:42:01.184200] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:07.195 [2024-10-08 18:42:01.184243] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:07.195 [2024-10-08 18:42:01.184262] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:07.456 [2024-10-08 18:42:01.272528] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:07.456 [2024-10-08 18:42:01.458638] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:07.456 [2024-10-08 18:42:01.458712] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:07.457 [2024-10-08 18:42:01.458737] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:07.457 [2024-10-08 18:42:01.458756] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:07.457 [2024-10-08 18:42:01.458785] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:07.457 [2024-10-08 18:42:01.463345] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16d62f0 was disconnected and freed. delete nvme_qpair. 
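[annotation] The get_bdev_list/wait_for_bdev polling that follows is just an RPC round trip repeated once a second; a minimal sketch of that loop, using the same /tmp/host.sock socket and the jq/sort/xargs pipeline visible in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  get_bdev_list() {
    # list bdev names on the host app, normalized to one sorted line
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # wait_for_bdev nvme0n1: poll until discovery has attached the namespace
  while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
    sleep 1
  done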
00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:07.457 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:07.718 18:42:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.660 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.660 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.660 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.660 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.660 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.660 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.660 18:42:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.919 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.920 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.920 18:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:09.860 18:42:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:10.799 18:42:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.180 18:42:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.180 18:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.119 [2024-10-08 18:42:06.899104] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:13.119 [2024-10-08 18:42:06.899147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.119 [2024-10-08 18:42:06.899156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.119 [2024-10-08 18:42:06.899163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.119 [2024-10-08 18:42:06.899168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.119 [2024-10-08 18:42:06.899174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.119 [2024-10-08 18:42:06.899180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.119 [2024-10-08 18:42:06.899185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.119 [2024-10-08 18:42:06.899190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.119 [2024-10-08 18:42:06.899196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.119 [2024-10-08 18:42:06.899201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.119 [2024-10-08 18:42:06.899206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b2d40 is same with the state(6) to be set 00:26:13.119 [2024-10-08 18:42:06.909126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b2d40 (9): Bad file descriptor 00:26:13.119 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.119 [2024-10-08 18:42:06.919160] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:13.119 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.119 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.119 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.119 18:42:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.119 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.119 18:42:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.183 [2024-10-08 18:42:07.959062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:14.183 [2024-10-08 18:42:07.959166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b2d40 with addr=10.0.0.2, port=4420 00:26:14.183 [2024-10-08 18:42:07.959201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b2d40 is same with the state(6) to be set 00:26:14.183 [2024-10-08 18:42:07.959264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b2d40 (9): Bad file descriptor 00:26:14.183 [2024-10-08 18:42:07.960382] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:14.183 [2024-10-08 18:42:07.960454] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.183 [2024-10-08 18:42:07.960477] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.183 [2024-10-08 18:42:07.960500] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.183 [2024-10-08 18:42:07.960566] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:14.183 [2024-10-08 18:42:07.960603] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.183 18:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.183 18:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.183 18:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.125 [2024-10-08 18:42:08.963001] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:15.125 [2024-10-08 18:42:08.963019] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:15.125 [2024-10-08 18:42:08.963025] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:15.125 [2024-10-08 18:42:08.963031] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:15.125 [2024-10-08 18:42:08.963040] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:15.125 [2024-10-08 18:42:08.963055] bdev_nvme.c:7007:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:15.125 [2024-10-08 18:42:08.963072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.125 [2024-10-08 18:42:08.963080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.125 [2024-10-08 18:42:08.963087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.125 [2024-10-08 18:42:08.963093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.125 [2024-10-08 18:42:08.963098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.125 [2024-10-08 18:42:08.963103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.125 [2024-10-08 18:42:08.963109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.125 [2024-10-08 18:42:08.963114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.126 [2024-10-08 18:42:08.963120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.126 [2024-10-08 18:42:08.963125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.126 [2024-10-08 18:42:08.963130] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:15.126 [2024-10-08 18:42:08.963527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2480 (9): Bad file descriptor 00:26:15.126 [2024-10-08 18:42:08.964537] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:15.126 [2024-10-08 18:42:08.964546] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.126 18:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.126 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.386 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:15.386 18:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.327 18:42:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:16.327 18:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.268 [2024-10-08 18:42:11.019177] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:17.268 [2024-10-08 18:42:11.019193] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:17.268 [2024-10-08 18:42:11.019203] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:17.268 [2024-10-08 18:42:11.105442] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:17.268 [2024-10-08 18:42:11.210674] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:17.269 [2024-10-08 18:42:11.210706] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:17.269 [2024-10-08 18:42:11.210721] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:17.269 [2024-10-08 18:42:11.210732] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:17.269 [2024-10-08 18:42:11.210738] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:17.269 [2024-10-08 18:42:11.217439] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16bd290 was disconnected and freed. delete nvme_qpair. 
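The reattach above, ending with the nvme1n1 bdev being recreated, is triggered by the address re-plumb visible two steps earlier in the trace. Condensed to the commands themselves (namespace, device, and address exactly as traced; the matching address removal happened earlier in the test, outside this excerpt):

# Re-add the target address inside the SPDK network namespace and bring the
# link back up; the discovery poller then reattaches and nvme1n1 returns.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1   # the poll loop sketched above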
00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1361454 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1361454 ']' 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1361454 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.269 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361454 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361454' 00:26:17.529 killing process with pid 1361454 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1361454 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1361454 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.529 rmmod nvme_tcp 00:26:17.529 rmmod nvme_fabrics 00:26:17.529 rmmod nvme_keyring 00:26:17.529 18:42:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1361131 ']' 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1361131 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1361131 ']' 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1361131 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.529 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361131 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361131' 00:26:17.790 killing process with pid 1361131 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1361131 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1361131 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.790 18:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.333 00:26:20.333 real 0m23.560s 00:26:20.333 user 0m27.412s 00:26:20.333 sys 0m7.231s 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.333 ************************************ 00:26:20.333 END TEST nvmf_discovery_remove_ifc 00:26:20.333 ************************************ 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.333 ************************************ 00:26:20.333 START TEST nvmf_identify_kernel_target 00:26:20.333 ************************************ 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:20.333 * Looking for test storage... 00:26:20.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:26:20.333 18:42:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:20.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.333 --rc genhtml_branch_coverage=1 00:26:20.333 --rc genhtml_function_coverage=1 00:26:20.333 --rc genhtml_legend=1 00:26:20.333 --rc geninfo_all_blocks=1 00:26:20.333 --rc geninfo_unexecuted_blocks=1 00:26:20.333 00:26:20.333 ' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:20.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.333 --rc genhtml_branch_coverage=1 00:26:20.333 --rc genhtml_function_coverage=1 00:26:20.333 --rc genhtml_legend=1 00:26:20.333 --rc geninfo_all_blocks=1 00:26:20.333 --rc geninfo_unexecuted_blocks=1 00:26:20.333 00:26:20.333 ' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:20.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.333 --rc genhtml_branch_coverage=1 00:26:20.333 --rc genhtml_function_coverage=1 00:26:20.333 --rc genhtml_legend=1 00:26:20.333 --rc geninfo_all_blocks=1 00:26:20.333 --rc geninfo_unexecuted_blocks=1 00:26:20.333 00:26:20.333 ' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:20.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.333 --rc genhtml_branch_coverage=1 00:26:20.333 --rc genhtml_function_coverage=1 00:26:20.333 --rc genhtml_legend=1 00:26:20.333 --rc geninfo_all_blocks=1 00:26:20.333 --rc geninfo_unexecuted_blocks=1 00:26:20.333 00:26:20.333 ' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.333 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:20.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.334 18:42:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.474 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.475 18:42:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.475 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.475 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.475 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.475 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:26:28.475 00:26:28.475 --- 10.0.0.2 ping statistics --- 00:26:28.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.475 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:26:28.475 00:26:28.475 --- 10.0.0.1 ping statistics --- 00:26:28.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.475 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.475 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.476 18:42:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:28.476 18:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:31.777 Waiting for block devices as requested 00:26:31.777 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:31.777 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:31.777 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:31.777 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:31.777 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:31.777 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:32.037 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:32.037 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:32.037 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:32.297 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:32.297 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:32.558 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:32.558 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:32.558 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:32.832 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:32.832 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:32.832 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
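The configure_kernel_target trace that follows picks a usable local block device (not zoned, no GPT in use) and builds a kernel NVMe-oF target through the nvmet configfs tree. xtrace hides where the bare echo commands are redirected, so this is only a sketch using the standard nvmet attribute names that fit the visible values; attr_model in particular is an assumption about the redirect target, not something read from the trace:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet
mkdir "$subsys"                 # subsystem, as mkdir'd in the trace
mkdir "$subsys/namespaces/1"    # one namespace
mkdir "$nvmet/ports/1"          # one port
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the first echo
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420    # verification step, as run in the trace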
00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:33.098 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:33.359 No valid GPT data, bailing 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:26:33.359 00:26:33.359 Discovery Log Number of Records 2, Generation counter 2 00:26:33.359 =====Discovery Log Entry 0====== 00:26:33.359 trtype: tcp 00:26:33.359 adrfam: ipv4 00:26:33.359 subtype: current discovery subsystem 00:26:33.359 treq: not specified, sq flow control disable supported 00:26:33.359 portid: 1 00:26:33.359 trsvcid: 4420 00:26:33.359 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:33.359 traddr: 10.0.0.1 00:26:33.359 eflags: none 00:26:33.359 sectype: none 00:26:33.359 =====Discovery Log Entry 1====== 00:26:33.359 trtype: tcp 00:26:33.359 adrfam: ipv4 00:26:33.359 subtype: nvme subsystem 00:26:33.359 treq: not specified, sq flow control disable 
supported 00:26:33.359 portid: 1 00:26:33.359 trsvcid: 4420 00:26:33.359 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:33.359 traddr: 10.0.0.1 00:26:33.359 eflags: none 00:26:33.359 sectype: none 00:26:33.359 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:33.359 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:33.359 ===================================================== 00:26:33.359 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:33.359 ===================================================== 00:26:33.359 Controller Capabilities/Features 00:26:33.359 ================================ 00:26:33.359 Vendor ID: 0000 00:26:33.359 Subsystem Vendor ID: 0000 00:26:33.359 Serial Number: c943c27c512be6758dd4 00:26:33.359 Model Number: Linux 00:26:33.359 Firmware Version: 6.8.9-20 00:26:33.359 Recommended Arb Burst: 0 00:26:33.359 IEEE OUI Identifier: 00 00 00 00:26:33.359 Multi-path I/O 00:26:33.359 May have multiple subsystem ports: No 00:26:33.359 May have multiple controllers: No 00:26:33.360 Associated with SR-IOV VF: No 00:26:33.360 Max Data Transfer Size: Unlimited 00:26:33.360 Max Number of Namespaces: 0 00:26:33.360 Max Number of I/O Queues: 1024 00:26:33.360 NVMe Specification Version (VS): 1.3 00:26:33.360 NVMe Specification Version (Identify): 1.3 00:26:33.360 Maximum Queue Entries: 1024 00:26:33.360 Contiguous Queues Required: No 00:26:33.360 Arbitration Mechanisms Supported 00:26:33.360 Weighted Round Robin: Not Supported 00:26:33.360 Vendor Specific: Not Supported 00:26:33.360 Reset Timeout: 7500 ms 00:26:33.360 Doorbell Stride: 4 bytes 00:26:33.360 NVM Subsystem Reset: Not Supported 00:26:33.360 Command Sets Supported 00:26:33.360 NVM Command Set: Supported 00:26:33.360 Boot Partition: Not Supported 00:26:33.360 Memory Page Size Minimum: 4096 bytes 00:26:33.360 Memory Page Size Maximum: 4096 bytes 00:26:33.360 Persistent Memory Region: Not Supported 00:26:33.360 Optional Asynchronous Events Supported 00:26:33.360 Namespace Attribute Notices: Not Supported 00:26:33.360 Firmware Activation Notices: Not Supported 00:26:33.360 ANA Change Notices: Not Supported 00:26:33.360 PLE Aggregate Log Change Notices: Not Supported 00:26:33.360 LBA Status Info Alert Notices: Not Supported 00:26:33.360 EGE Aggregate Log Change Notices: Not Supported 00:26:33.360 Normal NVM Subsystem Shutdown event: Not Supported 00:26:33.360 Zone Descriptor Change Notices: Not Supported 00:26:33.360 Discovery Log Change Notices: Supported 00:26:33.360 Controller Attributes 00:26:33.360 128-bit Host Identifier: Not Supported 00:26:33.360 Non-Operational Permissive Mode: Not Supported 00:26:33.360 NVM Sets: Not Supported 00:26:33.360 Read Recovery Levels: Not Supported 00:26:33.360 Endurance Groups: Not Supported 00:26:33.360 Predictable Latency Mode: Not Supported 00:26:33.360 Traffic Based Keep ALive: Not Supported 00:26:33.360 Namespace Granularity: Not Supported 00:26:33.360 SQ Associations: Not Supported 00:26:33.360 UUID List: Not Supported 00:26:33.360 Multi-Domain Subsystem: Not Supported 00:26:33.360 Fixed Capacity Management: Not Supported 00:26:33.360 Variable Capacity Management: Not Supported 00:26:33.360 Delete Endurance Group: Not Supported 00:26:33.360 Delete NVM Set: Not Supported 00:26:33.360 Extended LBA Formats Supported: Not Supported 00:26:33.360 Flexible Data Placement 
Supported: Not Supported 00:26:33.360 00:26:33.360 Controller Memory Buffer Support 00:26:33.360 ================================ 00:26:33.360 Supported: No 00:26:33.360 00:26:33.360 Persistent Memory Region Support 00:26:33.360 ================================ 00:26:33.360 Supported: No 00:26:33.360 00:26:33.360 Admin Command Set Attributes 00:26:33.360 ============================ 00:26:33.360 Security Send/Receive: Not Supported 00:26:33.360 Format NVM: Not Supported 00:26:33.360 Firmware Activate/Download: Not Supported 00:26:33.360 Namespace Management: Not Supported 00:26:33.360 Device Self-Test: Not Supported 00:26:33.360 Directives: Not Supported 00:26:33.360 NVMe-MI: Not Supported 00:26:33.360 Virtualization Management: Not Supported 00:26:33.360 Doorbell Buffer Config: Not Supported 00:26:33.360 Get LBA Status Capability: Not Supported 00:26:33.360 Command & Feature Lockdown Capability: Not Supported 00:26:33.360 Abort Command Limit: 1 00:26:33.360 Async Event Request Limit: 1 00:26:33.360 Number of Firmware Slots: N/A 00:26:33.360 Firmware Slot 1 Read-Only: N/A 00:26:33.360 Firmware Activation Without Reset: N/A 00:26:33.360 Multiple Update Detection Support: N/A 00:26:33.360 Firmware Update Granularity: No Information Provided 00:26:33.360 Per-Namespace SMART Log: No 00:26:33.360 Asymmetric Namespace Access Log Page: Not Supported 00:26:33.360 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:33.360 Command Effects Log Page: Not Supported 00:26:33.360 Get Log Page Extended Data: Supported 00:26:33.360 Telemetry Log Pages: Not Supported 00:26:33.360 Persistent Event Log Pages: Not Supported 00:26:33.360 Supported Log Pages Log Page: May Support 00:26:33.360 Commands Supported & Effects Log Page: Not Supported 00:26:33.360 Feature Identifiers & Effects Log Page:May Support 00:26:33.360 NVMe-MI Commands & Effects Log Page: May Support 00:26:33.360 Data Area 4 for Telemetry Log: Not Supported 00:26:33.360 Error Log Page Entries Supported: 1 00:26:33.360 Keep Alive: Not Supported 00:26:33.360 00:26:33.360 NVM Command Set Attributes 00:26:33.360 ========================== 00:26:33.360 Submission Queue Entry Size 00:26:33.360 Max: 1 00:26:33.360 Min: 1 00:26:33.360 Completion Queue Entry Size 00:26:33.360 Max: 1 00:26:33.360 Min: 1 00:26:33.360 Number of Namespaces: 0 00:26:33.360 Compare Command: Not Supported 00:26:33.360 Write Uncorrectable Command: Not Supported 00:26:33.360 Dataset Management Command: Not Supported 00:26:33.360 Write Zeroes Command: Not Supported 00:26:33.360 Set Features Save Field: Not Supported 00:26:33.360 Reservations: Not Supported 00:26:33.360 Timestamp: Not Supported 00:26:33.360 Copy: Not Supported 00:26:33.360 Volatile Write Cache: Not Present 00:26:33.360 Atomic Write Unit (Normal): 1 00:26:33.360 Atomic Write Unit (PFail): 1 00:26:33.360 Atomic Compare & Write Unit: 1 00:26:33.360 Fused Compare & Write: Not Supported 00:26:33.360 Scatter-Gather List 00:26:33.360 SGL Command Set: Supported 00:26:33.360 SGL Keyed: Not Supported 00:26:33.360 SGL Bit Bucket Descriptor: Not Supported 00:26:33.360 SGL Metadata Pointer: Not Supported 00:26:33.360 Oversized SGL: Not Supported 00:26:33.360 SGL Metadata Address: Not Supported 00:26:33.360 SGL Offset: Supported 00:26:33.360 Transport SGL Data Block: Not Supported 00:26:33.360 Replay Protected Memory Block: Not Supported 00:26:33.360 00:26:33.360 Firmware Slot Information 00:26:33.360 ========================= 00:26:33.360 Active slot: 0 00:26:33.360 00:26:33.360 00:26:33.360 Error Log 00:26:33.360 
========= 00:26:33.360 00:26:33.360 Active Namespaces 00:26:33.360 ================= 00:26:33.360 Discovery Log Page 00:26:33.360 ================== 00:26:33.360 Generation Counter: 2 00:26:33.360 Number of Records: 2 00:26:33.360 Record Format: 0 00:26:33.360 00:26:33.360 Discovery Log Entry 0 00:26:33.360 ---------------------- 00:26:33.360 Transport Type: 3 (TCP) 00:26:33.360 Address Family: 1 (IPv4) 00:26:33.360 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:33.360 Entry Flags: 00:26:33.360 Duplicate Returned Information: 0 00:26:33.360 Explicit Persistent Connection Support for Discovery: 0 00:26:33.360 Transport Requirements: 00:26:33.360 Secure Channel: Not Specified 00:26:33.360 Port ID: 1 (0x0001) 00:26:33.360 Controller ID: 65535 (0xffff) 00:26:33.360 Admin Max SQ Size: 32 00:26:33.360 Transport Service Identifier: 4420 00:26:33.360 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:33.360 Transport Address: 10.0.0.1 00:26:33.360 Discovery Log Entry 1 00:26:33.360 ---------------------- 00:26:33.360 Transport Type: 3 (TCP) 00:26:33.360 Address Family: 1 (IPv4) 00:26:33.360 Subsystem Type: 2 (NVM Subsystem) 00:26:33.360 Entry Flags: 00:26:33.360 Duplicate Returned Information: 0 00:26:33.360 Explicit Persistent Connection Support for Discovery: 0 00:26:33.360 Transport Requirements: 00:26:33.360 Secure Channel: Not Specified 00:26:33.360 Port ID: 1 (0x0001) 00:26:33.360 Controller ID: 65535 (0xffff) 00:26:33.360 Admin Max SQ Size: 32 00:26:33.360 Transport Service Identifier: 4420 00:26:33.360 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:33.360 Transport Address: 10.0.0.1 00:26:33.621 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:33.621 get_feature(0x01) failed 00:26:33.621 get_feature(0x02) failed 00:26:33.621 get_feature(0x04) failed 00:26:33.621 ===================================================== 00:26:33.621 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:33.621 ===================================================== 00:26:33.621 Controller Capabilities/Features 00:26:33.621 ================================ 00:26:33.621 Vendor ID: 0000 00:26:33.621 Subsystem Vendor ID: 0000 00:26:33.621 Serial Number: 8e6f16d3912c56c50f50 00:26:33.621 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:33.621 Firmware Version: 6.8.9-20 00:26:33.621 Recommended Arb Burst: 6 00:26:33.621 IEEE OUI Identifier: 00 00 00 00:26:33.621 Multi-path I/O 00:26:33.621 May have multiple subsystem ports: Yes 00:26:33.621 May have multiple controllers: Yes 00:26:33.621 Associated with SR-IOV VF: No 00:26:33.621 Max Data Transfer Size: Unlimited 00:26:33.621 Max Number of Namespaces: 1024 00:26:33.621 Max Number of I/O Queues: 128 00:26:33.621 NVMe Specification Version (VS): 1.3 00:26:33.621 NVMe Specification Version (Identify): 1.3 00:26:33.621 Maximum Queue Entries: 1024 00:26:33.621 Contiguous Queues Required: No 00:26:33.621 Arbitration Mechanisms Supported 00:26:33.621 Weighted Round Robin: Not Supported 00:26:33.621 Vendor Specific: Not Supported 00:26:33.621 Reset Timeout: 7500 ms 00:26:33.621 Doorbell Stride: 4 bytes 00:26:33.622 NVM Subsystem Reset: Not Supported 00:26:33.622 Command Sets Supported 00:26:33.622 NVM Command Set: Supported 00:26:33.622 Boot Partition: Not Supported 00:26:33.622 
Memory Page Size Minimum: 4096 bytes 00:26:33.622 Memory Page Size Maximum: 4096 bytes 00:26:33.622 Persistent Memory Region: Not Supported 00:26:33.622 Optional Asynchronous Events Supported 00:26:33.622 Namespace Attribute Notices: Supported 00:26:33.622 Firmware Activation Notices: Not Supported 00:26:33.622 ANA Change Notices: Supported 00:26:33.622 PLE Aggregate Log Change Notices: Not Supported 00:26:33.622 LBA Status Info Alert Notices: Not Supported 00:26:33.622 EGE Aggregate Log Change Notices: Not Supported 00:26:33.622 Normal NVM Subsystem Shutdown event: Not Supported 00:26:33.622 Zone Descriptor Change Notices: Not Supported 00:26:33.622 Discovery Log Change Notices: Not Supported 00:26:33.622 Controller Attributes 00:26:33.622 128-bit Host Identifier: Supported 00:26:33.622 Non-Operational Permissive Mode: Not Supported 00:26:33.622 NVM Sets: Not Supported 00:26:33.622 Read Recovery Levels: Not Supported 00:26:33.622 Endurance Groups: Not Supported 00:26:33.622 Predictable Latency Mode: Not Supported 00:26:33.622 Traffic Based Keep ALive: Supported 00:26:33.622 Namespace Granularity: Not Supported 00:26:33.622 SQ Associations: Not Supported 00:26:33.622 UUID List: Not Supported 00:26:33.622 Multi-Domain Subsystem: Not Supported 00:26:33.622 Fixed Capacity Management: Not Supported 00:26:33.622 Variable Capacity Management: Not Supported 00:26:33.622 Delete Endurance Group: Not Supported 00:26:33.622 Delete NVM Set: Not Supported 00:26:33.622 Extended LBA Formats Supported: Not Supported 00:26:33.622 Flexible Data Placement Supported: Not Supported 00:26:33.622 00:26:33.622 Controller Memory Buffer Support 00:26:33.622 ================================ 00:26:33.622 Supported: No 00:26:33.622 00:26:33.622 Persistent Memory Region Support 00:26:33.622 ================================ 00:26:33.622 Supported: No 00:26:33.622 00:26:33.622 Admin Command Set Attributes 00:26:33.622 ============================ 00:26:33.622 Security Send/Receive: Not Supported 00:26:33.622 Format NVM: Not Supported 00:26:33.622 Firmware Activate/Download: Not Supported 00:26:33.622 Namespace Management: Not Supported 00:26:33.622 Device Self-Test: Not Supported 00:26:33.622 Directives: Not Supported 00:26:33.622 NVMe-MI: Not Supported 00:26:33.622 Virtualization Management: Not Supported 00:26:33.622 Doorbell Buffer Config: Not Supported 00:26:33.622 Get LBA Status Capability: Not Supported 00:26:33.622 Command & Feature Lockdown Capability: Not Supported 00:26:33.622 Abort Command Limit: 4 00:26:33.622 Async Event Request Limit: 4 00:26:33.622 Number of Firmware Slots: N/A 00:26:33.622 Firmware Slot 1 Read-Only: N/A 00:26:33.622 Firmware Activation Without Reset: N/A 00:26:33.622 Multiple Update Detection Support: N/A 00:26:33.622 Firmware Update Granularity: No Information Provided 00:26:33.622 Per-Namespace SMART Log: Yes 00:26:33.622 Asymmetric Namespace Access Log Page: Supported 00:26:33.622 ANA Transition Time : 10 sec 00:26:33.622 00:26:33.622 Asymmetric Namespace Access Capabilities 00:26:33.622 ANA Optimized State : Supported 00:26:33.622 ANA Non-Optimized State : Supported 00:26:33.622 ANA Inaccessible State : Supported 00:26:33.622 ANA Persistent Loss State : Supported 00:26:33.622 ANA Change State : Supported 00:26:33.622 ANAGRPID is not changed : No 00:26:33.622 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:33.622 00:26:33.622 ANA Group Identifier Maximum : 128 00:26:33.622 Number of ANA Group Identifiers : 128 00:26:33.622 Max Number of Allowed Namespaces : 1024 00:26:33.622 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:33.622 Command Effects Log Page: Supported 00:26:33.622 Get Log Page Extended Data: Supported 00:26:33.622 Telemetry Log Pages: Not Supported 00:26:33.622 Persistent Event Log Pages: Not Supported 00:26:33.622 Supported Log Pages Log Page: May Support 00:26:33.622 Commands Supported & Effects Log Page: Not Supported 00:26:33.622 Feature Identifiers & Effects Log Page:May Support 00:26:33.622 NVMe-MI Commands & Effects Log Page: May Support 00:26:33.622 Data Area 4 for Telemetry Log: Not Supported 00:26:33.622 Error Log Page Entries Supported: 128 00:26:33.622 Keep Alive: Supported 00:26:33.622 Keep Alive Granularity: 1000 ms 00:26:33.622 00:26:33.622 NVM Command Set Attributes 00:26:33.622 ========================== 00:26:33.622 Submission Queue Entry Size 00:26:33.622 Max: 64 00:26:33.622 Min: 64 00:26:33.622 Completion Queue Entry Size 00:26:33.622 Max: 16 00:26:33.622 Min: 16 00:26:33.622 Number of Namespaces: 1024 00:26:33.622 Compare Command: Not Supported 00:26:33.622 Write Uncorrectable Command: Not Supported 00:26:33.622 Dataset Management Command: Supported 00:26:33.622 Write Zeroes Command: Supported 00:26:33.622 Set Features Save Field: Not Supported 00:26:33.622 Reservations: Not Supported 00:26:33.622 Timestamp: Not Supported 00:26:33.622 Copy: Not Supported 00:26:33.622 Volatile Write Cache: Present 00:26:33.622 Atomic Write Unit (Normal): 1 00:26:33.622 Atomic Write Unit (PFail): 1 00:26:33.622 Atomic Compare & Write Unit: 1 00:26:33.622 Fused Compare & Write: Not Supported 00:26:33.622 Scatter-Gather List 00:26:33.622 SGL Command Set: Supported 00:26:33.622 SGL Keyed: Not Supported 00:26:33.622 SGL Bit Bucket Descriptor: Not Supported 00:26:33.622 SGL Metadata Pointer: Not Supported 00:26:33.622 Oversized SGL: Not Supported 00:26:33.622 SGL Metadata Address: Not Supported 00:26:33.622 SGL Offset: Supported 00:26:33.622 Transport SGL Data Block: Not Supported 00:26:33.622 Replay Protected Memory Block: Not Supported 00:26:33.622 00:26:33.622 Firmware Slot Information 00:26:33.622 ========================= 00:26:33.622 Active slot: 0 00:26:33.622 00:26:33.622 Asymmetric Namespace Access 00:26:33.622 =========================== 00:26:33.622 Change Count : 0 00:26:33.622 Number of ANA Group Descriptors : 1 00:26:33.622 ANA Group Descriptor : 0 00:26:33.622 ANA Group ID : 1 00:26:33.622 Number of NSID Values : 1 00:26:33.622 Change Count : 0 00:26:33.622 ANA State : 1 00:26:33.622 Namespace Identifier : 1 00:26:33.622 00:26:33.622 Commands Supported and Effects 00:26:33.622 ============================== 00:26:33.622 Admin Commands 00:26:33.622 -------------- 00:26:33.622 Get Log Page (02h): Supported 00:26:33.622 Identify (06h): Supported 00:26:33.622 Abort (08h): Supported 00:26:33.622 Set Features (09h): Supported 00:26:33.622 Get Features (0Ah): Supported 00:26:33.622 Asynchronous Event Request (0Ch): Supported 00:26:33.622 Keep Alive (18h): Supported 00:26:33.622 I/O Commands 00:26:33.622 ------------ 00:26:33.622 Flush (00h): Supported 00:26:33.622 Write (01h): Supported LBA-Change 00:26:33.622 Read (02h): Supported 00:26:33.622 Write Zeroes (08h): Supported LBA-Change 00:26:33.622 Dataset Management (09h): Supported 00:26:33.622 00:26:33.622 Error Log 00:26:33.622 ========= 00:26:33.622 Entry: 0 00:26:33.622 Error Count: 0x3 00:26:33.622 Submission Queue Id: 0x0 00:26:33.622 Command Id: 0x5 00:26:33.622 Phase Bit: 0 00:26:33.622 Status Code: 0x2 00:26:33.622 Status Code Type: 0x0 00:26:33.622 Do Not Retry: 1 00:26:33.622 
Error Location: 0x28 00:26:33.622 LBA: 0x0 00:26:33.622 Namespace: 0x0 00:26:33.622 Vendor Log Page: 0x0 00:26:33.622 ----------- 00:26:33.622 Entry: 1 00:26:33.622 Error Count: 0x2 00:26:33.622 Submission Queue Id: 0x0 00:26:33.622 Command Id: 0x5 00:26:33.622 Phase Bit: 0 00:26:33.622 Status Code: 0x2 00:26:33.622 Status Code Type: 0x0 00:26:33.622 Do Not Retry: 1 00:26:33.622 Error Location: 0x28 00:26:33.622 LBA: 0x0 00:26:33.622 Namespace: 0x0 00:26:33.622 Vendor Log Page: 0x0 00:26:33.622 ----------- 00:26:33.622 Entry: 2 00:26:33.622 Error Count: 0x1 00:26:33.622 Submission Queue Id: 0x0 00:26:33.622 Command Id: 0x4 00:26:33.622 Phase Bit: 0 00:26:33.622 Status Code: 0x2 00:26:33.622 Status Code Type: 0x0 00:26:33.622 Do Not Retry: 1 00:26:33.622 Error Location: 0x28 00:26:33.622 LBA: 0x0 00:26:33.622 Namespace: 0x0 00:26:33.622 Vendor Log Page: 0x0 00:26:33.622 00:26:33.622 Number of Queues 00:26:33.622 ================ 00:26:33.622 Number of I/O Submission Queues: 128 00:26:33.622 Number of I/O Completion Queues: 128 00:26:33.622 00:26:33.622 ZNS Specific Controller Data 00:26:33.622 ============================ 00:26:33.622 Zone Append Size Limit: 0 00:26:33.622 00:26:33.622 00:26:33.622 Active Namespaces 00:26:33.622 ================= 00:26:33.622 get_feature(0x05) failed 00:26:33.622 Namespace ID:1 00:26:33.622 Command Set Identifier: NVM (00h) 00:26:33.622 Deallocate: Supported 00:26:33.622 Deallocated/Unwritten Error: Not Supported 00:26:33.622 Deallocated Read Value: Unknown 00:26:33.622 Deallocate in Write Zeroes: Not Supported 00:26:33.622 Deallocated Guard Field: 0xFFFF 00:26:33.622 Flush: Supported 00:26:33.622 Reservation: Not Supported 00:26:33.622 Namespace Sharing Capabilities: Multiple Controllers 00:26:33.622 Size (in LBAs): 3750748848 (1788GiB) 00:26:33.622 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:33.622 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:33.622 UUID: c6206329-991d-4447-9a93-254fff01fc51 00:26:33.622 Thin Provisioning: Not Supported 00:26:33.622 Per-NS Atomic Units: Yes 00:26:33.622 Atomic Write Unit (Normal): 8 00:26:33.622 Atomic Write Unit (PFail): 8 00:26:33.622 Preferred Write Granularity: 8 00:26:33.622 Atomic Compare & Write Unit: 8 00:26:33.622 Atomic Boundary Size (Normal): 0 00:26:33.622 Atomic Boundary Size (PFail): 0 00:26:33.622 Atomic Boundary Offset: 0 00:26:33.622 NGUID/EUI64 Never Reused: No 00:26:33.622 ANA group ID: 1 00:26:33.622 Namespace Write Protected: No 00:26:33.622 Number of LBA Formats: 1 00:26:33.622 Current LBA Format: LBA Format #00 00:26:33.622 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:33.622 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.622 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.622 rmmod nvme_tcp 00:26:33.623 rmmod nvme_fabrics 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.623 18:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:36.168 18:42:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:39.466 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:39.466 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:39.467 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:40.037 00:26:40.037 real 0m19.975s 00:26:40.037 user 0m5.481s 00:26:40.037 sys 0m11.506s 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.037 ************************************ 00:26:40.037 END TEST nvmf_identify_kernel_target 00:26:40.037 ************************************ 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.037 ************************************ 00:26:40.037 START TEST nvmf_auth_host 00:26:40.037 ************************************ 00:26:40.037 18:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:40.037 * Looking for test storage... 
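For readers following the trace: the clean_kernel_target helper invoked above is only a handful of configfs operations. A consolidated sketch of that teardown, with the NQN and port index taken from the log (the bare 'echo 0' in the trace is assumed to write the namespace's enable attribute before removal):

# Sketch of the kernel nvmet teardown traced above, not the helper itself.
nqn=nqn.2016-06.io.spdk:testnqn
cfs=/sys/kernel/config/nvmet
echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # disable the namespace (assumed target of 'echo 0')
rm -f "$cfs/ports/1/subsystems/$nqn"                  # unlink subsystem from port 1
rmdir "$cfs/subsystems/$nqn/namespaces/1"             # drop the namespace
rmdir "$cfs/ports/1"                                  # drop the port
rmdir "$cfs/subsystems/$nqn"                          # drop the subsystem
modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules

The ioatdma -> vfio-pci and nvme -> vfio-pci lines above are setup.sh handing the devices back to userspace drivers for the next test.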
00:26:40.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.037 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:40.037 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:40.037 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.299 --rc genhtml_branch_coverage=1 00:26:40.299 --rc genhtml_function_coverage=1 00:26:40.299 --rc genhtml_legend=1 00:26:40.299 --rc geninfo_all_blocks=1 00:26:40.299 --rc geninfo_unexecuted_blocks=1 00:26:40.299 00:26:40.299 ' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.299 --rc genhtml_branch_coverage=1 00:26:40.299 --rc genhtml_function_coverage=1 00:26:40.299 --rc genhtml_legend=1 00:26:40.299 --rc geninfo_all_blocks=1 00:26:40.299 --rc geninfo_unexecuted_blocks=1 00:26:40.299 00:26:40.299 ' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.299 --rc genhtml_branch_coverage=1 00:26:40.299 --rc genhtml_function_coverage=1 00:26:40.299 --rc genhtml_legend=1 00:26:40.299 --rc geninfo_all_blocks=1 00:26:40.299 --rc geninfo_unexecuted_blocks=1 00:26:40.299 00:26:40.299 ' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.299 --rc genhtml_branch_coverage=1 00:26:40.299 --rc genhtml_function_coverage=1 00:26:40.299 --rc genhtml_legend=1 00:26:40.299 --rc geninfo_all_blocks=1 00:26:40.299 --rc geninfo_unexecuted_blocks=1 00:26:40.299 00:26:40.299 ' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.299 18:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.299 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.300 18:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:48.435 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.436 18:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:48.436 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:48.436 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.436 
18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:48.436 Found net devices under 0000:31:00.0: cvl_0_0 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:48.436 Found net devices under 0000:31:00.1: cvl_0_1 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.436 18:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:26:48.436 00:26:48.436 --- 10.0.0.2 ping statistics --- 00:26:48.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.436 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:26:48.436 00:26:48.436 --- 10.0.0.1 ping statistics --- 00:26:48.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.436 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1376656 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1376656 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1376656 ']' 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
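The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message above comes from waitforlisten. A plausible minimal sketch of such a readiness poll, assuming the stock scripts/rpc.py client and the default socket path (the real helper presumably also checks that pid 1376656 is still alive between attempts):

# Sketch only: block until the freshly started nvmf_tgt answers RPCs.
# spdk_get_version is a cheap RPC that succeeds once the app is listening.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock spdk_get_version > /dev/null 2>&1 && break
    sleep 0.1
done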
00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.436 18:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=43cc6a78d6899b593db42cfcaa16ef67 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.xxY 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 43cc6a78d6899b593db42cfcaa16ef67 0 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 43cc6a78d6899b593db42cfcaa16ef67 0 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=43cc6a78d6899b593db42cfcaa16ef67 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.xxY 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.xxY 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.xxY 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:49.007 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.008 18:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a658331d409855c1999def33c5ab2da56c607b6ea8cd6155480c3077a7b7a487 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xHn 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a658331d409855c1999def33c5ab2da56c607b6ea8cd6155480c3077a7b7a487 3 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a658331d409855c1999def33c5ab2da56c607b6ea8cd6155480c3077a7b7a487 3 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a658331d409855c1999def33c5ab2da56c607b6ea8cd6155480c3077a7b7a487 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:49.008 18:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xHn 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xHn 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xHn 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8cbbb3bb65be36733144cd21b12719e9da466d4a109cbe15 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.qzQ 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8cbbb3bb65be36733144cd21b12719e9da466d4a109cbe15 0 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8cbbb3bb65be36733144cd21b12719e9da466d4a109cbe15 0 
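At this point gen_dhchap_key has produced a random hex string with xxd and hands it to format_dhchap_key / format_key, whose inline 'python -' heredoc turns it into an NVMe DH-HMAC-CHAP secret. A standalone sketch of that transformation, assuming the DHHC-1 representation is base64(secret || CRC32-LE(secret)) with a leading digest id (0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map traced above), and that the hex string's ASCII bytes serve as the secret itself:

# Sketch of the DHHC-1 formatting performed by the 'python -' heredoc above.
key=43cc6a78d6899b593db42cfcaa16ef67   # keys[0] hex string from the trace
digest=0                               # 'null' digest id
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                      # ASCII bytes of the hex string
crc = zlib.crc32(secret).to_bytes(4, "little")     # little-endian CRC32 trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

The resulting string is what lands in /tmp/spdk.key-null.xxY before the chmod 0600 seen in the trace.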
00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8cbbb3bb65be36733144cd21b12719e9da466d4a109cbe15 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:49.008 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.qzQ 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.qzQ 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qzQ 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=90323ad0b657f2199811c1e520b4361b8be23988bd82498b 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.1mN 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 90323ad0b657f2199811c1e520b4361b8be23988bd82498b 2 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 90323ad0b657f2199811c1e520b4361b8be23988bd82498b 2 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=90323ad0b657f2199811c1e520b4361b8be23988bd82498b 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.1mN 00:26:49.268 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.1mN 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1mN 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.269 18:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=58104b39298adb0919834c8858a3372c 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.4Wl 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 58104b39298adb0919834c8858a3372c 1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 58104b39298adb0919834c8858a3372c 1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=58104b39298adb0919834c8858a3372c 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.4Wl 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.4Wl 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4Wl 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9667caab9649daf90e9052d8a943973f 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.rQg 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9667caab9649daf90e9052d8a943973f 1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9667caab9649daf90e9052d8a943973f 1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=9667caab9649daf90e9052d8a943973f 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.rQg 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.rQg 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rQg 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=eaff21283a2922e66a92be79d06068aca334d8cce1b7c981 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.1nG 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key eaff21283a2922e66a92be79d06068aca334d8cce1b7c981 2 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 eaff21283a2922e66a92be79d06068aca334d8cce1b7c981 2 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=eaff21283a2922e66a92be79d06068aca334d8cce1b7c981 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:49.269 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.1nG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.1nG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1nG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.530 18:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6b7c9cd01f8cf0fc98a1c8d5db08bf46 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Nay 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6b7c9cd01f8cf0fc98a1c8d5db08bf46 0 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6b7c9cd01f8cf0fc98a1c8d5db08bf46 0 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6b7c9cd01f8cf0fc98a1c8d5db08bf46 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Nay 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Nay 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Nay 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ee904d61bb832c3b8fdf648750d02d8b49fa185af04df153b2995784d1456c23 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.vEG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ee904d61bb832c3b8fdf648750d02d8b49fa185af04df153b2995784d1456c23 3 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ee904d61bb832c3b8fdf648750d02d8b49fa185af04df153b2995784d1456c23 3 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ee904d61bb832c3b8fdf648750d02d8b49fa185af04df153b2995784d1456c23 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.vEG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.vEG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vEG 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1376656 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1376656 ']' 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.530 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xxY 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xHn ]] 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xHn 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qzQ 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1mN ]] 00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.1mN
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4Wl
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rQg ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rQg
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1nG
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Nay ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Nay
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vEG
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
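
The gen_dhchap_key/format_dhchap_key sequence traced above reduces to a handful of commands: pull random bytes from /dev/urandom as a hex string, wrap that string as a DHHC-1 secret (two-digit digest id: null=00, sha256=01, sha384=02, sha512=03, then base64 over the ASCII key material plus a CRC32 trailer; decoding any of the secrets above yields the hex string printed at common.sh@753), then hand the 0600-mode file to the running SPDK app under a keyring name (key0..key4 and ckey0..ckey3 here). A minimal standalone sketch; the CRC byte order and the python one-liner are reconstructions from the trace, not the verbatim helpers:

    # Generate a 32-hex-char secret and wrap it as a DHHC-1 sha256 (id 01) key.
    key=$(xxd -p -c0 -l 16 /dev/urandom)   # len=32 hex chars -> 16 random bytes
    file=$(mktemp -t spdk.key-sha256.XXX)
    # ASCII hex string is the key material; append CRC32 (little-endian is an
    # assumption), base64 the result, and frame it as DHHC-1:<digest>:<blob>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:01:%s:" % base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode())' "$key" > "$file"
    chmod 0600 "$file"
    # Same RPC the loop above issues for each generated key file.
    scripts/rpc.py keyring_file_add_key key2 "$file"
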
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme
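
get_main_ns_ip, which just ran and runs again before every connect in this test, is only a transport-to-variable lookup followed by an indirect expansion. A condensed reconstruction (the real helper lives in nvmf/common.sh; TEST_TRANSPORT as the selector variable is an assumption from context, since xtrace only shows its expanded value "tcp"):

    # Resolve the address the test dials: rdma runs use the first target IP,
    # tcp runs use the initiator-side IP (10.0.0.1 in this job).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -n $ip ]] && echo "$ip"
    }
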
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:49.792 18:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:53.992 Waiting for block devices as requested
00:26:53.992 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:26:53.992 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:26:54.252 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:26:54.252 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:26:54.252 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:26:54.511 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:26:54.511 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:26:54.511 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:26:54.511 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:26:54.770 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:26:55.710 No valid GPT data, bailing
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
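
configure_kernel_target has now built the configfs skeleton, and the echo records that follow fill in its attributes. Spelled out as standalone shell (xtrace never prints redirection targets, so the attribute paths below are the standard nvmet configfs names and are inferred, not read from the log):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"       # the auth setup flips this to 0 below
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"          # expose the subsystem on the port

The nvme discover output right after confirms the result: the port answers on 10.0.0.1:4420 and lists nqn.2024-02.io.spdk:cnode0 next to the discovery subsystem.
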
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:26:55.710 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:26:55.711
00:26:55.711 Discovery Log Number of Records 2, Generation counter 2
00:26:55.711 =====Discovery Log Entry 0======
00:26:55.711 trtype: tcp
00:26:55.711 adrfam: ipv4
00:26:55.711 subtype: current discovery subsystem
00:26:55.711 treq: not specified, sq flow control disable supported
00:26:55.711 portid: 1
00:26:55.711 trsvcid: 4420
00:26:55.711 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:55.711 traddr: 10.0.0.1
00:26:55.711 eflags: none
00:26:55.711 sectype: none
00:26:55.711 =====Discovery Log Entry 1======
00:26:55.711 trtype: tcp
00:26:55.711 adrfam: ipv4
00:26:55.711 subtype: nvme subsystem
00:26:55.711 treq: not specified, sq flow control disable supported
00:26:55.711 portid: 1
00:26:55.711 trsvcid: 4420
00:26:55.711 subnqn: nqn.2024-02.io.spdk:cnode0
00:26:55.711 traddr: 10.0.0.1
00:26:55.711 eflags: none
00:26:55.711 sectype: none
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==:
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==:
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
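
After the allowed_hosts link above, nvmet_auth_set_key (which starts with the 'hmac(sha256)' write and finishes in the next few records) provisions the kernel target's half of DH-HMAC-CHAP for this host NQN. With the redirection targets made explicit (again inferred: these are the kernel's standard per-host dhchap attributes, not paths visible in the trace):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"    # digest for the challenge
    echo ffdhe2048 > "$host/dhchap_dhgroup"      # DH group for the exchange
    echo 'DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==:' > "$host/dhchap_ctrl_key"

These are the same two secrets the initiator registered as key1 (host secret) and ckey1 (controller secret), which it will shortly present as --dhchap-key key1 --dhchap-ctrlr-key ckey1.
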
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==:
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]]
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==:
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.711 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.971 nvme0n1 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
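
Each connect_authenticate pass is the same four RPCs from the initiator side; one cycle spelled out as a sketch (scripts/rpc.py talking to the default /var/tmp/spdk.sock, flags exactly as traced):

    # Restrict the host to one digest/dhgroup pair, then do an authenticated
    # attach, verify the controller came up, and tear it down again.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers     # expect [{"name": "nvme0", ...}]
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the trace is this cycle iterated over the full matrix: digests sha256/sha384/sha512, dhgroups ffdhe2048 through ffdhe8192, and key ids 0-4, with nvmet_auth_set_key re-seeding the kernel target before every attach.
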
00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.971 18:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.231 nvme0n1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.231 18:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.231 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.492 nvme0n1 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.492 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.752 nvme0n1 00:26:56.752 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.753 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 nvme0n1 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.013 18:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 nvme0n1 00:26:57.013 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.013 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.013 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.013 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.013 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 nvme0n1 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.273 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.533 
18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.533 nvme0n1 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.533 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.794 18:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.794 nvme0n1 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.794 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.055 18:42:51 
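The get_main_ns_ip helper traced here is a small transport dispatch: an associative array maps each transport to the name of the environment variable holding the address to dial, and indirect expansion resolves it (NVMF_INITIATOR_IP, i.e. 10.0.0.1, for tcp in this run). The same idiom in isolation, with illustrative values:

  # Transport -> address-variable dispatch, as in get_main_ns_ip.
  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1
  var=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
  echo "${!var}"                          # indirect expansion -> 10.0.0.1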
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.055 18:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.055 nvme0n1 00:26:58.055 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.055 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.316 18:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.316 nvme0n1 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.316 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.576 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.577 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.837 nvme0n1 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:26:58.837 18:42:52 
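The DHHC-1 strings used throughout follow the NVMe DH-HMAC-CHAP secret representation (a reading of the spec, not something this log states): the second field says how the secret is transformed before use (00 none, 01/02/03 SHA-256/384/512), and the base64 payload is the secret followed by a 4-byte CRC-32, so a 32-byte secret decodes to 36 bytes. A quick structural check on a key copied from the trace:

  # base64(<secret> || <CRC-32>) -- a 48-char payload decodes to 36 bytes,
  # i.e. a 32-byte secret plus the 4-byte checksum.
  key='DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM:'
  payload=$(cut -d: -f3 <<<"$key")
  total=$(base64 -d <<<"$payload" | wc -c)
  echo "secret bytes: $((total - 4)) (+4 CRC)"   # -> secret bytes: 32 (+4 CRC)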
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.837 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.838 18:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.098 nvme0n1 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.098 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.358 nvme0n1 00:26:59.358 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.358 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.358 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.358 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.358 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.358 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.618 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.879 nvme0n1 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:59.879 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.880 18:42:53 
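The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is the switch between bidirectional and unidirectional authentication: bash's :+ expansion emits the flag pair only when a controller key is configured, and key id 4 (empty ckey, hence the [[ -z '' ]] checks) therefore attaches with the host key alone. The expansion in isolation, with illustrative values:

  # ':+' yields its alternative only for a set, non-empty slot, so an empty
  # ckey produces zero extra arguments.
  ckeys=( [1]='DHHC-1:02:...' [4]='' )
  for keyid in 1 4; do
      flags=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid: ${#flags[@]} extra args"   # -> 2, then 0
  done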
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.880 18:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.140 nvme0n1 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.140 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.141 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.711 nvme0n1 00:27:00.711 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.711 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 
00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.712 18:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.282 nvme0n1 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.282 18:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.282 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.543 nvme0n1 00:27:01.543 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.543 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.543 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.543 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.543 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.543 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.803 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.804 18:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.064 nvme0n1 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.064 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.324 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.584 nvme0n1 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.584 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.844 18:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:03.413 nvme0n1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.413 18:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.982 nvme0n1 00:27:03.982 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.982 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.982 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.982 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.982 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.982 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:04.241 
18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.241 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.811 nvme0n1 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.811 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.812 
18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.812 18:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.420 nvme0n1 00:27:05.420 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.680 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.680 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.680 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.681 18:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.251 nvme0n1 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.251 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 nvme0n1 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.511 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.512 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.772 nvme0n1 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:06.772 18:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.772 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.032 nvme0n1 00:27:07.032 18:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.032 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.033 18:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.293 nvme0n1 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.293 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.553 nvme0n1 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.553 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.554 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.852 nvme0n1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.852 
18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.852 18:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.852 nvme0n1 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.852 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.167 18:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.167 nvme0n1 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.167 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.430 nvme0n1 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.430 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.431 
18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.431 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.691 nvme0n1 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.691 
18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:08.691 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.951 18:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.211 nvme0n1 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.211 18:43:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.211 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.471 nvme0n1 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.471 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.472 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.732 nvme0n1 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.732 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.733 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.993 18:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.993 nvme0n1 00:27:09.993 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.993 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.993 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.993 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.993 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.252 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.253 18:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.253 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.512 nvme0n1 00:27:10.512 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
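
Each pass of the loop traced above reduces to the same four host-side calls. A minimal sketch of one pass, driven by hand through SPDK's rpc.py rather than the test's rpc_cmd wrapper; the digest, DH group, NQNs, address, port and key names are exactly the ones printed in the trace, while "key2"/"ckey2" refer to keyring entries the test registered earlier in the log (not shown in this stretch):

rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2   # authenticates during connect
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc.py bdev_nvme_detach_controller nvme0              # tear down for the next pass
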
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
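
On the target side, the four echoes traced at host/auth.sh@48-51 push the digest, DH group and secrets into the kernel nvmet host entry. A speculative reconstruction, assuming the standard nvmet configfs attributes; the configfs paths are an assumption (they never appear in the trace), only the echoed values are taken from the log:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest, as a kernel crypto API name
echo "$dhgroup"     > "$host/dhchap_dhgroup"   # e.g. ffdhe6144
echo "$key"         > "$host/dhchap_key"       # the DHHC-1:xx:...: host secret
# a controller secret enables bidirectional auth; the @51 guard shows it is
# only written when the ckey for this keyid is non-empty:
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
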
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.513 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.083 nvme0n1 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.083 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.084 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.084 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.084 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.084 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.084 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.084 18:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.344 nvme0n1 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.344 18:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.344 18:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.344 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.604 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.604 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.604 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.865 nvme0n1 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.865 18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.865 
18:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.436 nvme0n1 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:12.436 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
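
The secrets echoed throughout use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where (per the spec and nvme-cli's usage) the two-digit field selects an optional transformation hash for the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a CRC-32. That is why the 02 and 03 keys in this trace are visibly longer than the 00/01 ones: they carry 48- and 64-byte secrets versus 32. Assuming a recent nvme-cli, keys of this shape can be generated with, e.g.:

nvme gen-dhchap-key --key-length=48 --hmac=2   # emits a DHHC-1:02:<base64>: secret
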
common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.437 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.008 nvme0n1 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.008 18:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.008 18:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.578 nvme0n1 00:27:13.578 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.578 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.578 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.578 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
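
The get_main_ns_ip block that precedes every attach (nvmf/common.sh@767-781) is fully visible in the trace; reassembled as a function it is roughly the following, with the untraced control flow inferred and $transport standing in for whatever variable held "tcp" when @773 expanded it:

get_main_ns_ip() {
    local ip                                       # @767
    local -A ip_candidates=()                      # @768
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # @770
    ip_candidates["tcp"]=NVMF_INITIATOR_IP         # @771
    # @773: bail out unless the transport is known and has a candidate
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}                # @774: the env-var *name*
    [[ -z ${!ip} ]] && return 1                    # @776: dereference -> 10.0.0.1
    echo "${!ip}"                                  # @781
}
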
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.579 18:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 nvme0n1 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.518 
18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.518 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.519 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.519 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.519 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.088 nvme0n1 00:27:15.088 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.088 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.088 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.088 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.088 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.088 18:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.088 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.657 nvme0n1 00:27:15.657 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.657 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.657 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.657 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.657 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.657 18:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.916 18:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.916 18:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 nvme0n1 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.484 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:16.743 nvme0n1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.743 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.002 nvme0n1 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.002 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:17.002 
18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.003 18:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 nvme0n1 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.263 
18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.263 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.264 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.264 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.264 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.524 nvme0n1 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.524 nvme0n1 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.524 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.784 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.785 nvme0n1 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.785 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.044 
18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.044 18:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.044 18:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.044 nvme0n1 00:27:18.044 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.044 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.044 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.044 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.044 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.303 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:18.304 18:43:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.304 nvme0n1 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.304 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.562 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.562 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.562 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.562 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.562 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.563 18:43:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.563 nvme0n1 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.563 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.821 
18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.821 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.822 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
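The trace repeats one fixed sequence for every digest/dhgroup/keyid combination. Below is a minimal bash sketch of that loop, reconstructed only from the host/auth.sh line numbers visible in this trace; nvmet_auth_set_key and rpc_cmd are the test suite's own helpers (not redefined here), and the array contents are assumptions beyond the values that actually appear in this log (sha384/sha512; ffdhe2048, ffdhe3072, ffdhe8192; keyids 0-4).

  # Assumed contents -- only the values visible in this trace are certain.
  digests=(sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)
  # keys[0..4] hold the DHHC-1:xx:... secrets echoed above; ckeys[4] is the
  # empty string, so keyid 4 attaches without --dhchap-ctrlr-key.

  for digest in "${digests[@]}"; do        # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@101
      for keyid in "${!keys[@]}"; do       # host/auth.sh@102
        # host/auth.sh@103: install the key (and ctrlr key, when one is set)
        # on the kernel nvmet target side
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # host/auth.sh@60-61: restrict the SPDK initiator to this
        # digest/dhgroup pair, then connect with DH-HMAC-CHAP
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # host/auth.sh@64-65: authentication passed iff the controller
        # enumerates under its expected name, then tear down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done

Each successful pass surfaces the namespace (the bare "nvme0n1" lines interleaved above) before the detach, which is the marker separating one iteration's trace from the next.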
00:27:19.080 nvme0n1 00:27:19.080 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.080 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.080 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.081 18:43:12 
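The DHHC-1 strings flowing past are NVMe-oF configured secrets in the representation nvme-cli's gen-dhchap-key produces: a literal DHHC-1 tag, a two-digit field naming the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and a base64 payload carrying the secret plus a 4-byte CRC-32 tail (treat the CRC layout as an assumption from the key-format spec, not something this log shows). A quick decode of the key0 secret above:

secret='DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM:'
IFS=: read -r tag hash_id payload _ <<< "$secret"
echo "tag=$tag hash_id=$hash_id"             # tag=DHHC-1 hash_id=00
bytes=$(printf '%s' "$payload" | base64 -d | wc -c)
echo "secret length: $((bytes - 4)) bytes"   # 32 bytes once the CRC-32 tail is dropped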
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.081 18:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.340 nvme0n1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.340 18:43:13 
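The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 is doing real work: bash's ":+" alternate-value expansion inside an array assignment leaves the array either empty or holding exactly the two extra argv words, so --dhchap-ctrlr-key is passed only when a controller key exists for that keyid. The mechanics in isolation:

ckeys=()
ckeys[1]='DHHC-1:02:placeholder'   # stand-in value, not a real secret
ckeys[4]=''                        # keyid 4 has no controller key in this run

for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra argv words"
done
# keyid=1 -> 2 extra argv words
# keyid=4 -> 0 extra argv words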
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.340 18:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.340 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.600 nvme0n1 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
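get_main_ns_ip, traced at nvmf/common.sh@767-781 throughout this section, is a small transport-to-address dispatch: an associative array maps each transport to the name of the variable holding the right address, and indirect expansion turns that name into a value. Reconstructed from the trace (the TEST_TRANSPORT variable name is an assumption; the checks mirror the '[[ -z ... ]]' frames):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1    # the '[[ -z tcp ]]' frame
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip || -z ${!ip} ]] && return 1   # variable name set, and its value set
    echo "${!ip}"                           # 10.0.0.1 for tcp in this run
}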
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.600 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.860 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.860 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.860 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.860 nvme0n1 00:27:19.860 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.119 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.120 18:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.120 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.120 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.120 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.379 nvme0n1 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.379 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.380 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.638 nvme0n1 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.638 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.639 18:43:14 
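The host/auth.sh@101-104 frames give away the shape of the sweep: an outer loop over DH groups and an inner loop over every configured key index, re-keying the target and re-proving the connection for each pair. For the sha512 leg covered by this excerpt (the group list below is inferred from the groups that actually appear here; the ffdhe groups are the RFC 7919 finite-field DH groups):

digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)       # as seen in this excerpt
for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                       # host/auth.sh@102, keyids 0..4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: program the target
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: attach, verify, detach
    done
done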
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.639 18:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.205 nvme0n1 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.205 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.206 18:43:15 
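One decoding note for the recurring '[[ nvme0 == \n\v\m\e\0 ]]' frames: those backslashes are not in the script. When the right-hand side of == inside [[ ]] is quoted, bash's xtrace re-prints it with every character escaped, to mark it as a literal match rather than a glob pattern. The underlying assertion is simply:

# What host/auth.sh@64 runs, modulo xtrace's re-quoting:
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # a mismatch here fails the test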
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.206 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 nvme0n1 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.773 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.774 18:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.033 nvme0n1 00:27:22.033 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.293 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 nvme0n1 00:27:22.553 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.553 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.553 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.553 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.553 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.813 18:43:16 
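keyid 4 is the unidirectional case: its controller key is empty, which is why the frames above show a bare '# ckey=' and a '[[ -z '' ]]' with no follow-up echo, and why the attach that follows carries --dhchap-key key4 but no --dhchap-ctrlr-key. Only the host proves its identity; the controller is not challenged in return. The target-side guard appears to be the @51 pattern (the dhchap_ctrl_key path is assumed, by analogy with the other configfs writes):

# Write the controller key only when one exists for this keyid.
[[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"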
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.813 18:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.072 nvme0n1 00:27:23.072 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.072 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.072 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.073 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
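One thing this excerpt takes for granted: key0..key4 and ckey0..ckey4, as passed to --dhchap-key/--dhchap-ctrlr-key, are names of keyring entries rather than the secrets themselves. The secrets were registered with SPDK's keyring before this excerpt begins; roughly as follows, treating the keyring_file_add_key usage as an assumption about that earlier, unshown step:

for keyid in "${!keys[@]}"; do
    key_file=$(mktemp)
    echo "${keys[keyid]}" > "$key_file"
    rpc_cmd keyring_file_add_key "key$keyid" "$key_file"   # assumed registration step
done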
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjYzZhNzhkNjg5OWI1OTNkYjQyY2ZjYWExNmVmNjcIMTGM: 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: ]] 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY1ODMzMWQ0MDk4NTVjMTk5OWRlZjMzYzVhYjJkYTU2YzYwN2I2ZWE4Y2Q2MTU1NDgwYzMwNzdhN2I3YTQ4N+N/G+w=: 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:23.334 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.335 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.903 nvme0n1 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.903 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.904 18:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.473 nvme0n1 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.734 18:43:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.734 18:43:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.734 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.735 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.735 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.735 18:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.303 nvme0n1 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWFmZjIxMjgzYTI5MjJlNjZhOTJiZTc5ZDA2MDY4YWNhMzM0ZDhjY2UxYjdjOTgx6ax2Bg==: 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: ]] 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmI3YzljZDAxZjhjZjBmYzk4YTFjOGQ1ZGIwOGJmNDapUI2a: 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.303 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.304 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.304 
18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.240 nvme0n1 00:27:26.240 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.240 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.240 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.240 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.240 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.240 18:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWU5MDRkNjFiYjgzMmMzYjhmZGY2NDg3NTBkMDJkOGI0OWZhMTg1YWYwNGRmMTUzYjI5OTU3ODRkMTQ1NmMyM8lAVr4=: 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.240 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.806 nvme0n1 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.806 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.807 request: 00:27:26.807 { 00:27:26.807 "name": "nvme0", 00:27:26.807 "trtype": "tcp", 00:27:26.807 "traddr": "10.0.0.1", 00:27:26.807 "adrfam": "ipv4", 00:27:26.807 "trsvcid": "4420", 00:27:26.807 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:26.807 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:26.807 "prchk_reftag": false, 00:27:26.807 "prchk_guard": false, 00:27:26.807 "hdgst": false, 00:27:26.807 "ddgst": false, 00:27:26.807 "allow_unrecognized_csi": false, 00:27:26.807 "method": "bdev_nvme_attach_controller", 00:27:26.807 "req_id": 1 00:27:26.807 } 00:27:26.807 Got JSON-RPC error response 00:27:26.807 response: 00:27:26.807 { 00:27:26.807 "code": -5, 00:27:26.807 "message": "Input/output error" 00:27:26.807 } 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.807 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.066 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:27.066 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:27.066 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.066 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.066 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.066 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
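
The request/response pair above is the suite's first negative check: with no --dhchap-key supplied, the DHCHAP-protected target rejects the connection, and the JSON-RPC layer surfaces that as code -5 (Input/output error). A minimal stand-alone sketch of the same check, assuming SPDK's scripts/rpc.py client is on PATH and the kernel target configured earlier in this run is still listening on 10.0.0.1:4420:

    # The attach must fail: the subsystem requires DHCHAP and no key is given.
    # A success here would mean the target accepted an unauthenticated host.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 2>/dev/null; then
        echo "unexpected: unauthenticated connect succeeded" >&2
        exit 1
    fi
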
00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.067 request: 00:27:27.067 { 00:27:27.067 "name": "nvme0", 00:27:27.067 "trtype": "tcp", 00:27:27.067 "traddr": "10.0.0.1", 00:27:27.067 "adrfam": "ipv4", 00:27:27.067 "trsvcid": "4420", 00:27:27.067 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:27.067 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:27.067 "prchk_reftag": false, 00:27:27.067 "prchk_guard": false, 00:27:27.067 "hdgst": false, 00:27:27.067 "ddgst": false, 00:27:27.067 "dhchap_key": "key2", 00:27:27.067 "allow_unrecognized_csi": false, 00:27:27.067 "method": "bdev_nvme_attach_controller", 00:27:27.067 "req_id": 1 00:27:27.067 } 00:27:27.067 Got JSON-RPC error response 00:27:27.067 response: 00:27:27.067 { 00:27:27.067 "code": -5, 00:27:27.067 "message": "Input/output error" 00:27:27.067 } 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.067 18:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
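
After each rejected attach, the suite additionally confirms via bdev_nvme_get_controllers that no half-initialized controller was left behind (the `jq length` probes above, expected to return 0). A hedged one-liner equivalent of that verification, assuming scripts/rpc.py and jq are available:

    # No controller should survive a failed DHCHAP negotiation.
    [ "$(scripts/rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ] ||
        { echo "stale controller left behind after auth failure" >&2; exit 1; }
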
00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.067 request: 00:27:27.067 { 00:27:27.067 "name": "nvme0", 00:27:27.067 "trtype": "tcp", 00:27:27.067 "traddr": "10.0.0.1", 00:27:27.067 "adrfam": "ipv4", 00:27:27.067 "trsvcid": "4420", 00:27:27.067 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:27.067 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:27.067 "prchk_reftag": false, 00:27:27.067 "prchk_guard": false, 00:27:27.067 "hdgst": false, 00:27:27.067 "ddgst": false, 00:27:27.067 "dhchap_key": "key1", 00:27:27.067 "dhchap_ctrlr_key": "ckey2", 00:27:27.067 "allow_unrecognized_csi": false, 00:27:27.067 "method": "bdev_nvme_attach_controller", 00:27:27.067 "req_id": 1 00:27:27.067 } 00:27:27.067 Got JSON-RPC error response 00:27:27.067 response: 00:27:27.067 { 00:27:27.067 "code": -5, 00:27:27.067 "message": "Input/output 
error" 00:27:27.067 } 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.067 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.327 nvme0n1 00:27:27.327 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.327 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.328 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.587 request: 00:27:27.587 { 00:27:27.587 "name": "nvme0", 00:27:27.587 "dhchap_key": "key1", 00:27:27.587 "dhchap_ctrlr_key": "ckey2", 00:27:27.587 "method": "bdev_nvme_set_keys", 00:27:27.587 "req_id": 1 00:27:27.587 } 00:27:27.588 Got JSON-RPC error response 00:27:27.588 response: 00:27:27.588 { 00:27:27.588 "code": -13, 00:27:27.588 "message": "Permission denied" 00:27:27.588 } 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:27.588 18:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:28.527 18:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNiYmIzYmI2NWJlMzY3MzMxNDRjZDIxYjEyNzE5ZTlkYTQ2NmQ0YTEwOWNiZTE1pccu2g==: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTAzMjNhZDBiNjU3ZjIxOTk4MTFjMWU1MjBiNDM2MWI4YmUyMzk4OGJkODI0OThi+yGw4g==: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.904 nvme0n1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxMDRiMzkyOThhZGIwOTE5ODM0Yzg4NThhMzM3MmOrgssM: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY2N2NhYWI5NjQ5ZGFmOTBlOTA1MmQ4YTk0Mzk3M2aMFjeb: 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.904 request: 00:27:29.904 { 00:27:29.904 "name": "nvme0", 00:27:29.904 "dhchap_key": "key2", 00:27:29.904 "dhchap_ctrlr_key": "ckey1", 00:27:29.904 "method": "bdev_nvme_set_keys", 00:27:29.904 "req_id": 1 00:27:29.904 } 00:27:29.904 Got JSON-RPC error response 00:27:29.904 response: 00:27:29.904 { 00:27:29.904 "code": -13, 00:27:29.904 "message": "Permission denied" 00:27:29.904 } 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:29.904 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:29.905 18:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:30.843 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.843 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:30.843 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.843 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.843 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:31.101 18:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.101 rmmod nvme_tcp 00:27:31.101 rmmod nvme_fabrics 00:27:31.101 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1376656 ']' 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1376656 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1376656 ']' 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1376656 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.102 18:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1376656 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1376656' 00:27:31.102 killing process with pid 1376656 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1376656 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1376656 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:31.102 18:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:33.640 18:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:36.940 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:36.940 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:37.200 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:37.200 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:37.200 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:37.200 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:37.461 18:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xxY /tmp/spdk.key-null.qzQ /tmp/spdk.key-sha256.4Wl /tmp/spdk.key-sha384.1nG /tmp/spdk.key-sha512.vEG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:37.461 18:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:41.664 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:27:41.664 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:41.664 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:41.664 00:27:41.664 real 1m1.362s 00:27:41.664 user 0m55.125s 00:27:41.664 sys 0m16.225s 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.664 ************************************ 00:27:41.664 END TEST nvmf_auth_host 00:27:41.664 ************************************ 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.664 ************************************ 00:27:41.664 START TEST nvmf_digest 00:27:41.664 ************************************ 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:41.664 * Looking for test storage... 
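[editorial sketch] The auth teardown a few records above (host/auth.sh cleanup -> clean_kernel_target) unwinds the kernel nvmet configfs tree before the digest suite starts. A minimal sketch of that teardown, using the NQN and port id from the trace; note the bare 'echo 0' in the xtrace hides its redirect target, which is plausibly the namespace enable attribute:

    nqn=nqn.2024-02.io.spdk:cnode0
    cfg=/sys/kernel/config/nvmet
    # Disable the namespace before removing it (assumed redirect target of 'echo 0').
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    # Unlink the subsystem from port 1, then remove the empty directories bottom-up,
    # matching nvmf/common.sh@714-717 in the records above.
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    # Finally drop the kernel target modules, as common.sh@721 does.
    modprobe -r nvmet_tcp nvmet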
00:27:41.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.664 --rc genhtml_branch_coverage=1 00:27:41.664 --rc genhtml_function_coverage=1 00:27:41.664 --rc genhtml_legend=1 00:27:41.664 --rc geninfo_all_blocks=1 00:27:41.664 --rc geninfo_unexecuted_blocks=1 00:27:41.664 00:27:41.664 ' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.664 --rc genhtml_branch_coverage=1 00:27:41.664 --rc genhtml_function_coverage=1 00:27:41.664 --rc genhtml_legend=1 00:27:41.664 --rc geninfo_all_blocks=1 00:27:41.664 --rc geninfo_unexecuted_blocks=1 00:27:41.664 00:27:41.664 ' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.664 --rc genhtml_branch_coverage=1 00:27:41.664 --rc genhtml_function_coverage=1 00:27:41.664 --rc genhtml_legend=1 00:27:41.664 --rc geninfo_all_blocks=1 00:27:41.664 --rc geninfo_unexecuted_blocks=1 00:27:41.664 00:27:41.664 ' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.664 --rc genhtml_branch_coverage=1 00:27:41.664 --rc genhtml_function_coverage=1 00:27:41.664 --rc genhtml_legend=1 00:27:41.664 --rc geninfo_all_blocks=1 00:27:41.664 --rc geninfo_unexecuted_blocks=1 00:27:41.664 00:27:41.664 ' 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.664 
18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.664 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:41.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:41.665 18:43:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:41.665 18:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.803 
18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:49.803 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:49.803 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:49.803 Found net devices under 0000:31:00.0: cvl_0_0 
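[editorial sketch] The discovery loop above (nvmf/common.sh@408-426) maps each supported NIC PCI function to its Linux net device through sysfs. A standalone sketch of the same lookup for the E810 ids seen in the trace; the operstate read is an assumption standing in for the '[[ up == up ]]' check in the records:

    # List netdevs owned by a PCI function (ice-driven E810 here).
    pci=0000:31:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] || continue            # no netdev registered for this function
        name=${dev##*/}                      # e.g. cvl_0_0 after the harness rename
        state=$(cat "$dev/operstate")        # assumed source of the 'up == up' test
        [[ $state == up ]] && echo "Found net devices under $pci: $name"
    done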
00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:49.803 Found net devices under 0000:31:00.1: cvl_0_1 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.803 18:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.803 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:27:49.803 00:27:49.803 --- 10.0.0.2 ping statistics --- 00:27:49.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.804 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:27:49.804 00:27:49.804 --- 10.0.0.1 ping statistics --- 00:27:49.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.804 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:49.804 ************************************ 00:27:49.804 START TEST nvmf_digest_clean 00:27:49.804 ************************************ 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1393843 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1393843 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1393843 ']' 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.804 18:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:49.804 [2024-10-08 18:43:43.349602] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:27:49.804 [2024-10-08 18:43:43.349661] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.804 [2024-10-08 18:43:43.439311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.804 [2024-10-08 18:43:43.532378] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.804 [2024-10-08 18:43:43.532438] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.804 [2024-10-08 18:43:43.532447] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.804 [2024-10-08 18:43:43.532454] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.804 [2024-10-08 18:43:43.532460] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
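[editorial sketch] nvmfappstart above launches the target inside the test namespace and blocks in waitforlisten until the RPC socket answers. A sketch of that launch-and-wait pattern with the paths from the trace; the polling loop is an assumption standing in for waitforlisten's internals:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to be configured.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1         # bail out if the target died early
        sleep 0.1
    done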
00:27:49.804 [2024-10-08 18:43:43.533233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.374 null0 00:27:50.374 [2024-10-08 18:43:44.304055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.374 [2024-10-08 18:43:44.328338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1394044 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1394044 /var/tmp/bperf.sock 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1394044 ']' 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:50.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.374 18:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.374 [2024-10-08 18:43:44.389044] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:27:50.374 [2024-10-08 18:43:44.389109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394044 ] 00:27:50.634 [2024-10-08 18:43:44.470625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.634 [2024-10-08 18:43:44.565643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.203 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.203 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:51.203 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:51.203 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:51.203 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:51.463 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:51.463 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:51.723 nvme0n1 00:27:51.723 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:51.723 18:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:51.984 Running I/O for 2 seconds... 
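[editorial sketch] Each digest pass follows the same bperf recipe visible in the records above: start bdevperf against its own RPC socket, finish framework init, attach an NVMe-oF controller with data digest enabled, then drive the run from bdevperf.py. Condensed from the trace (randread, 4 KiB, qd 128 variant shown):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # waitforlisten on /var/tmp/bperf.sock elided; see the polling sketch above.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C) on this controller.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests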
00:27:53.861 19178.00 IOPS, 74.91 MiB/s [2024-10-08T16:43:47.918Z] 19418.50 IOPS, 75.85 MiB/s 00:27:53.861 Latency(us) 00:27:53.861 [2024-10-08T16:43:47.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.861 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:53.861 nvme0n1 : 2.04 19064.80 74.47 0.00 0.00 6582.31 3153.92 46530.56 00:27:53.861 [2024-10-08T16:43:47.918Z] =================================================================================================================== 00:27:53.861 [2024-10-08T16:43:47.918Z] Total : 19064.80 74.47 0.00 0.00 6582.31 3153.92 46530.56 00:27:53.861 { 00:27:53.861 "results": [ 00:27:53.861 { 00:27:53.861 "job": "nvme0n1", 00:27:53.861 "core_mask": "0x2", 00:27:53.861 "workload": "randread", 00:27:53.861 "status": "finished", 00:27:53.861 "queue_depth": 128, 00:27:53.861 "io_size": 4096, 00:27:53.861 "runtime": 2.043819, 00:27:53.861 "iops": 19064.79976945121, 00:27:53.861 "mibps": 74.47187409941878, 00:27:53.861 "io_failed": 0, 00:27:53.861 "io_timeout": 0, 00:27:53.861 "avg_latency_us": 6582.309492792678, 00:27:53.861 "min_latency_us": 3153.92, 00:27:53.861 "max_latency_us": 46530.56 00:27:53.861 } 00:27:53.861 ], 00:27:53.861 "core_count": 1 00:27:53.861 } 00:27:53.861 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:53.861 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:53.861 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:53.861 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:53.861 | select(.opcode=="crc32c") 00:27:53.861 | "\(.module_name) \(.executed)"' 00:27:53.861 18:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1394044 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1394044 ']' 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1394044 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1394044 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1394044' 00:27:54.120 killing process with pid 1394044 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1394044 00:27:54.120 Received shutdown signal, test time was about 2.000000 seconds 00:27:54.120 00:27:54.120 Latency(us) 00:27:54.120 [2024-10-08T16:43:48.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.120 [2024-10-08T16:43:48.177Z] =================================================================================================================== 00:27:54.120 [2024-10-08T16:43:48.177Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.120 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1394044 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1394859 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1394859 /var/tmp/bperf.sock 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1394859 ']' 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.388 18:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.388 [2024-10-08 18:43:48.300526] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:27:54.388 [2024-10-08 18:43:48.300587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394859 ] 00:27:54.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:54.388 Zero copy mechanism will not be used. 00:27:54.388 [2024-10-08 18:43:48.375731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.388 [2024-10-08 18:43:48.428679] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.330 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.898 nvme0n1 00:27:55.898 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:55.898 18:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:55.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.898 Zero copy mechanism will not be used. 00:27:55.898 Running I/O for 2 seconds... 
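[editorial note] The MiB/s column in these result tables is just IOPS x I/O size / 2^20, which lets the digest overhead be compared across block sizes directly. A quick check against the 4 KiB table above (19178.00 IOPS -> 74.91 MiB/s) and the first 128 KiB sample below (3150.00 IOPS -> 393.75 MiB/s):

    # 4 KiB: IOPS / 256 gives MiB/s; 128 KiB: IOPS / 8.
    awk 'BEGIN { printf "%.2f\n", 19178.00 * 4096   / 1048576 }'   # 74.91
    awk 'BEGIN { printf "%.2f\n", 3150.00  * 131072 / 1048576 }'   # 393.75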
00:27:57.930 3150.00 IOPS, 393.75 MiB/s [2024-10-08T16:43:51.987Z] 3084.00 IOPS, 385.50 MiB/s 00:27:57.930 Latency(us) 00:27:57.930 [2024-10-08T16:43:51.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.930 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:57.930 nvme0n1 : 2.01 3089.20 386.15 0.00 0.00 5175.00 757.76 8847.36 00:27:57.930 [2024-10-08T16:43:51.987Z] =================================================================================================================== 00:27:57.930 [2024-10-08T16:43:51.987Z] Total : 3089.20 386.15 0.00 0.00 5175.00 757.76 8847.36 00:27:57.930 { 00:27:57.930 "results": [ 00:27:57.930 { 00:27:57.930 "job": "nvme0n1", 00:27:57.930 "core_mask": "0x2", 00:27:57.930 "workload": "randread", 00:27:57.930 "status": "finished", 00:27:57.930 "queue_depth": 16, 00:27:57.930 "io_size": 131072, 00:27:57.930 "runtime": 2.006345, 00:27:57.930 "iops": 3089.1995145401215, 00:27:57.930 "mibps": 386.1499393175152, 00:27:57.930 "io_failed": 0, 00:27:57.930 "io_timeout": 0, 00:27:57.930 "avg_latency_us": 5175.004891900614, 00:27:57.930 "min_latency_us": 757.76, 00:27:57.930 "max_latency_us": 8847.36 00:27:57.930 } 00:27:57.930 ], 00:27:57.930 "core_count": 1 00:27:57.930 } 00:27:57.930 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:57.930 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:57.930 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:57.930 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:57.930 | select(.opcode=="crc32c") 00:27:57.930 | "\(.module_name) \(.executed)"' 00:27:57.930 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1394859 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1394859 ']' 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1394859 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:58.190 18:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1394859 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1394859' 00:27:58.190 killing process with pid 1394859 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1394859 00:27:58.190 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.190 00:27:58.190 Latency(us) 00:27:58.190 [2024-10-08T16:43:52.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.190 [2024-10-08T16:43:52.247Z] =================================================================================================================== 00:27:58.190 [2024-10-08T16:43:52.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1394859 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1395572 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1395572 /var/tmp/bperf.sock 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1395572 ']' 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.190 18:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.190 [2024-10-08 18:43:52.232426] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:27:58.190 [2024-10-08 18:43:52.232485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395572 ] 00:27:58.449 [2024-10-08 18:43:52.306982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.449 [2024-10-08 18:43:52.360211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.017 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.017 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:59.017 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.017 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.017 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.276 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.276 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.537 nvme0n1 00:27:59.537 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:59.537 18:43:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.537 Running I/O for 2 seconds... 
00:28:01.856 29566.00 IOPS, 115.49 MiB/s [2024-10-08T16:43:55.913Z] 29527.00 IOPS, 115.34 MiB/s 00:28:01.856 Latency(us) 00:28:01.856 [2024-10-08T16:43:55.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.856 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:01.856 nvme0n1 : 2.01 29527.00 115.34 0.00 0.00 4327.89 2143.57 9775.79 00:28:01.856 [2024-10-08T16:43:55.913Z] =================================================================================================================== 00:28:01.856 [2024-10-08T16:43:55.913Z] Total : 29527.00 115.34 0.00 0.00 4327.89 2143.57 9775.79 00:28:01.856 { 00:28:01.856 "results": [ 00:28:01.856 { 00:28:01.856 "job": "nvme0n1", 00:28:01.856 "core_mask": "0x2", 00:28:01.856 "workload": "randwrite", 00:28:01.856 "status": "finished", 00:28:01.856 "queue_depth": 128, 00:28:01.856 "io_size": 4096, 00:28:01.856 "runtime": 2.005419, 00:28:01.856 "iops": 29526.99660270497, 00:28:01.856 "mibps": 115.33983047931629, 00:28:01.856 "io_failed": 0, 00:28:01.856 "io_timeout": 0, 00:28:01.856 "avg_latency_us": 4327.89004041837, 00:28:01.856 "min_latency_us": 2143.5733333333333, 00:28:01.856 "max_latency_us": 9775.786666666667 00:28:01.856 } 00:28:01.856 ], 00:28:01.856 "core_count": 1 00:28:01.856 } 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:01.856 | select(.opcode=="crc32c") 00:28:01.856 | "\(.module_name) \(.executed)"' 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1395572 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1395572 ']' 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1395572 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1395572 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1395572' 00:28:01.856 killing process with pid 1395572 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1395572 00:28:01.856 Received shutdown signal, test time was about 2.000000 seconds 00:28:01.856 00:28:01.856 Latency(us) 00:28:01.856 [2024-10-08T16:43:55.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.856 [2024-10-08T16:43:55.913Z] =================================================================================================================== 00:28:01.856 [2024-10-08T16:43:55.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.856 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1395572 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1396254 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1396254 /var/tmp/bperf.sock 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1396254 ']' 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.116 18:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.116 [2024-10-08 18:43:55.987037] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:28:02.116 [2024-10-08 18:43:55.987097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396254 ] 00:28:02.116 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.116 Zero copy mechanism will not be used. 00:28:02.116 [2024-10-08 18:43:56.064059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.116 [2024-10-08 18:43:56.117173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.057 18:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.625 nvme0n1 00:28:03.625 18:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:03.625 18:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.625 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:03.625 Zero copy mechanism will not be used. 00:28:03.625 Running I/O for 2 seconds... 
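Once I/O finishes, each pass is judged not on throughput but on who computed the digests: the script reads the bperf app's accel statistics and extracts the module that executed crc32c. With DSA scanning off (scan_dsa=false throughout), the expected module is software. Reduced to its two commands, exactly as they appear in the traces above and below:

  # emit "<module_name> <executed>" for the crc32c opcode
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the pass requires executed > 0 and module_name == "software"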
00:28:05.501 3999.00 IOPS, 499.88 MiB/s [2024-10-08T16:43:59.558Z] 3752.50 IOPS, 469.06 MiB/s 00:28:05.501 Latency(us) 00:28:05.501 [2024-10-08T16:43:59.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.501 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:05.501 nvme0n1 : 2.01 3748.50 468.56 0.00 0.00 4260.64 1495.04 6717.44 00:28:05.501 [2024-10-08T16:43:59.558Z] =================================================================================================================== 00:28:05.501 [2024-10-08T16:43:59.558Z] Total : 3748.50 468.56 0.00 0.00 4260.64 1495.04 6717.44 00:28:05.501 { 00:28:05.501 "results": [ 00:28:05.501 { 00:28:05.501 "job": "nvme0n1", 00:28:05.501 "core_mask": "0x2", 00:28:05.501 "workload": "randwrite", 00:28:05.501 "status": "finished", 00:28:05.501 "queue_depth": 16, 00:28:05.501 "io_size": 131072, 00:28:05.501 "runtime": 2.007204, 00:28:05.501 "iops": 3748.497910526284, 00:28:05.501 "mibps": 468.5622388157855, 00:28:05.501 "io_failed": 0, 00:28:05.501 "io_timeout": 0, 00:28:05.501 "avg_latency_us": 4260.637986886408, 00:28:05.501 "min_latency_us": 1495.04, 00:28:05.501 "max_latency_us": 6717.44 00:28:05.501 } 00:28:05.501 ], 00:28:05.501 "core_count": 1 00:28:05.501 } 00:28:05.501 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.501 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.501 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.501 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.501 | select(.opcode=="crc32c") 00:28:05.501 | "\(.module_name) \(.executed)"' 00:28:05.501 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1396254 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1396254 ']' 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1396254 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1396254 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1396254' 00:28:05.760 killing process with pid 1396254 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1396254 00:28:05.760 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.760 00:28:05.760 Latency(us) 00:28:05.760 [2024-10-08T16:43:59.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.760 [2024-10-08T16:43:59.817Z] =================================================================================================================== 00:28:05.760 [2024-10-08T16:43:59.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.760 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1396254 00:28:06.019 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1393843 00:28:06.019 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1393843 ']' 00:28:06.019 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1393843 00:28:06.019 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:06.019 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.020 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1393843 00:28:06.020 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:06.020 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:06.020 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1393843' 00:28:06.020 killing process with pid 1393843 00:28:06.020 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1393843 00:28:06.020 18:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1393843 00:28:06.278 00:28:06.278 real 0m16.808s 00:28:06.278 user 0m33.232s 00:28:06.278 sys 0m3.693s 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.278 ************************************ 00:28:06.278 END TEST nvmf_digest_clean 00:28:06.278 ************************************ 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:06.278 ************************************ 00:28:06.278 START TEST nvmf_digest_error 00:28:06.278 ************************************ 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 
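The error-path test starting here keeps the bperf pattern but adds a deliberately broken target: a fresh nvmf_tgt is started paused inside the test's network namespace, and the crc32c opcode is assigned to the accel error module before initialization completes, so digest corruption can be injected on demand. A rough outline of the target-side startup as the trace below shows it; the framework_start_init step is an assumption implied by --wait-for-rpc, since the batched rpc_cmd that configures the target runs with its output suppressed and only the resulting notices are visible:

  # start the target paused in the test namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # while paused, route all crc32c work through the error module
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  # finish init (assumed step); the trace then shows bdev null0, TCP transport
  # init, and a listener on 10.0.0.2 port 4420 being created
  ./scripts/rpc.py framework_start_init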
00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1397081 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1397081 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1397081 ']' 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.278 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.279 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.279 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.279 18:44:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.279 [2024-10-08 18:44:00.232756] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:28:06.279 [2024-10-08 18:44:00.232808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.279 [2024-10-08 18:44:00.318816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.538 [2024-10-08 18:44:00.375961] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.538 [2024-10-08 18:44:00.375997] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.538 [2024-10-08 18:44:00.376003] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.538 [2024-10-08 18:44:00.376008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.538 [2024-10-08 18:44:00.376012] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:06.538 [2024-10-08 18:44:00.376492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.106 [2024-10-08 18:44:01.062379] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.106 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.106 null0 00:28:07.106 [2024-10-08 18:44:01.140161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.366 [2024-10-08 18:44:01.164346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1397315 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1397315 /var/tmp/bperf.sock 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1397315 ']' 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:07.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.366 18:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.366 [2024-10-08 18:44:01.223489] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:28:07.366 [2024-10-08 18:44:01.223535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397315 ] 00:28:07.366 [2024-10-08 18:44:01.299214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.366 [2024-10-08 18:44:01.352778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.306 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.306 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:08.306 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:08.306 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:08.307 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:08.307 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.307 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.307 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.307 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.307 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.565 nvme0n1 00:28:08.565 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:08.565 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.565 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
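The commands just traced are the whole injection mechanism: the host is told to retry forever and keep NVMe error counts, and corruption is then toggled in the target's accel error module while the host keeps verifying digests. Distilled in the order the trace runs them (rpc_cmd targets the target's default socket, bperf_rpc the bperf socket):

  # host: retry indefinitely and keep per-error statistics (set before attaching)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: make sure no injection is standing before the controller attaches
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # host: attach with data digest enabled, as in the clean passes
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target: corrupt the next 256 crc32c operations, then drive reads from the host;
  # every read whose received payload fails the host's digest check will complete
  # as a transient transport error and be retried
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests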
00:28:08.565 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.565 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:08.565 18:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:08.825 Running I/O for 2 seconds... 00:28:08.825 [2024-10-08 18:44:02.698602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.825 [2024-10-08 18:44:02.698633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.825 [2024-10-08 18:44:02.698642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.825 [2024-10-08 18:44:02.710087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.825 [2024-10-08 18:44:02.710107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.825 [2024-10-08 18:44:02.710113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.825 [2024-10-08 18:44:02.720591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.825 [2024-10-08 18:44:02.720610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.825 [2024-10-08 18:44:02.720616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.825 [2024-10-08 18:44:02.728556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.825 [2024-10-08 18:44:02.728574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.825 [2024-10-08 18:44:02.728580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.738201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.738219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.738225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.747041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.747058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.747064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.757639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.757656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.757662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.765339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.765355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.765362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.774461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.774478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.774485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.783500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.783518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.783524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.793586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.793603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.793609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.801070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.801087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.801093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.810654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.810671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.810677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.820115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.820132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.820144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.828446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.828463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.828470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.837008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.837025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.837032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.845915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.845933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.845939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.855720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.855736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.855743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.864802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.864819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.864825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.826 [2024-10-08 18:44:02.873193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:08.826 [2024-10-08 18:44:02.873210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.826 [2024-10-08 18:44:02.873216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.086 [2024-10-08 18:44:02.882331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.086 [2024-10-08 18:44:02.882349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.086 [2024-10-08 18:44:02.882355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.086 [2024-10-08 18:44:02.891049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.086 [2024-10-08 18:44:02.891066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.086 [2024-10-08 18:44:02.891073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.086 [2024-10-08 18:44:02.898447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.086 [2024-10-08 18:44:02.898467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.086 [2024-10-08 18:44:02.898473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.909028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.909044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.909051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.918744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.918761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.918767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.927938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.927955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.927961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.935840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.935857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:09.087 [2024-10-08 18:44:02.935864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.946008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.946026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.946032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.954576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.954594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.954600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.963784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.963801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.963807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.972073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.972091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.972100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.981821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.981838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.981845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:02.991246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:02.991262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:02.991269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.001981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.001998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17267 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.002004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.010752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.010768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.010775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.022086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.022102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.022109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.033241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.033258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.033264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.044300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.044317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.044323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.053297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.053314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.053321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.061196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.061216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.087 [2024-10-08 18:44:03.061222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.087 [2024-10-08 18:44:03.069953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:09.087 [2024-10-08 18:44:03.069970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.087 [2024-10-08 18:44:03.069980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:09.087 [2024-10-08 18:44:03.079080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0)
00:28:09.087 [2024-10-08 18:44:03.079097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.087 [2024-10-08 18:44:03.079103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated "data digest error on tqpair=(0x9ebaf0)" / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets from 18:44:03.089746 through 18:44:03.677649 elided; only cid and lba vary, all qid:1, all dnr:0 ...]
00:28:09.871 27522.00 IOPS, 107.51 MiB/s [2024-10-08T16:44:03.928Z]
[... further identical triplets from 18:44:03.687813 through 18:44:04.389461 elided ...]
00:28:10.395 [2024-10-08 18:44:04.398488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.395 [2024-10-08 18:44:04.398505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.395 [2024-10-08 18:44:04.398511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.395 [2024-10-08 18:44:04.407903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.395 [2024-10-08 18:44:04.407918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.395 [2024-10-08 18:44:04.407925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.395 [2024-10-08 18:44:04.416887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.395 [2024-10-08 18:44:04.416904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.395 [2024-10-08 18:44:04.416910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.395 [2024-10-08 18:44:04.425701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.396 [2024-10-08 18:44:04.425719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.396 [2024-10-08 18:44:04.425725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.396 [2024-10-08 18:44:04.434697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.396 [2024-10-08 18:44:04.434714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.396 [2024-10-08 18:44:04.434721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.396 [2024-10-08 18:44:04.444220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.396 [2024-10-08 18:44:04.444236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.396 [2024-10-08 18:44:04.444243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.452031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.452048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.452054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.461707] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.461723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.461730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.470110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.470126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.470132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.479112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.479128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.479138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.488092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.488109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.488115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.499078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.499095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.499101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.506985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.507002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.507008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.516289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.516306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.516312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:10.657 [2024-10-08 18:44:04.526064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.526081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.526087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.534765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.534782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.534788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.543423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.543440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.543446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.552792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.552809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.552816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.561113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.657 [2024-10-08 18:44:04.561133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.657 [2024-10-08 18:44:04.561139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.657 [2024-10-08 18:44:04.569581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.569598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.569604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.578855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.578873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.578879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.588360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.588377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.588383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.597182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.597198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.605915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.605932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.605939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.613898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.613915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.613921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.623406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.623423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.623429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.632422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.632439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.632445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.641273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.641290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.641296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.650164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.650181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.650187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.657946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.657964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.657970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.667119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.667136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.667142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.678272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.678289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.678295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 [2024-10-08 18:44:04.686299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ebaf0) 00:28:10.658 [2024-10-08 18:44:04.686316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.658 [2024-10-08 18:44:04.686322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.658 27796.00 IOPS, 108.58 MiB/s 00:28:10.658 Latency(us) 00:28:10.658 [2024-10-08T16:44:04.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:10.658 nvme0n1 : 2.00 27814.81 108.65 0.00 0.00 4597.70 2252.80 13434.88 00:28:10.658 [2024-10-08T16:44:04.715Z] =================================================================================================================== 00:28:10.658 [2024-10-08T16:44:04.715Z] Total : 27814.81 108.65 0.00 0.00 4597.70 2252.80 13434.88 00:28:10.658 { 00:28:10.658 "results": [ 00:28:10.658 { 00:28:10.658 "job": "nvme0n1", 00:28:10.658 "core_mask": "0x2", 00:28:10.658 "workload": "randread", 00:28:10.658 "status": "finished", 00:28:10.658 "queue_depth": 128, 00:28:10.658 "io_size": 4096, 00:28:10.658 "runtime": 2.003249, 
00:28:10.658 "iops": 27814.8148333033, 00:28:10.658 "mibps": 108.65162044259101, 00:28:10.658 "io_failed": 0, 00:28:10.658 "io_timeout": 0, 00:28:10.658 "avg_latency_us": 4597.69751615219, 00:28:10.658 "min_latency_us": 2252.8, 00:28:10.658 "max_latency_us": 13434.88 00:28:10.658 } 00:28:10.658 ], 00:28:10.658 "core_count": 1 00:28:10.658 } 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:10.918 | .driver_specific 00:28:10.918 | .nvme_error 00:28:10.918 | .status_code 00:28:10.918 | .command_transient_transport_error' 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1397315 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1397315 ']' 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1397315 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.918 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1397315 00:28:11.176 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:11.176 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:11.176 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1397315' 00:28:11.176 killing process with pid 1397315 00:28:11.176 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1397315 00:28:11.176 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.176 00:28:11.176 Latency(us) 00:28:11.176 [2024-10-08T16:44:05.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.176 [2024-10-08T16:44:05.233Z] =================================================================================================================== 00:28:11.176 [2024-10-08T16:44:05.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.176 18:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1397315 00:28:11.176 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1398006
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1398006 /var/tmp/bperf.sock
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1398006 ']'
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:11.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:11.177 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:11.177 [2024-10-08 18:44:05.144050] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:28:11.177 [2024-10-08 18:44:05.144109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398006 ]
00:28:11.177 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:11.177 Zero copy mechanism will not be used.
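The bdevperf invocation above starts the I/O generator in idle mode: with -z it parks until perform_tests arrives over its private RPC socket at /var/tmp/bperf.sock. A rough sketch of that launch, with the flags copied from the trace; the socket-existence loop is a simplified stand-in for the harness's waitforlisten helper, which also probes the RPC endpoint:

# Start bdevperf idle (-z) on a dedicated RPC socket; flags copied from the trace above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Crude wait for the UNIX-domain socket to appear before issuing RPCs against it.
until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done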
00:28:11.177 [2024-10-08 18:44:05.222966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:11.435 [2024-10-08 18:44:05.276380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:28:12.003 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:12.003 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:12.003 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:12.003 18:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:12.262 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:12.262 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.262 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:12.262 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.262 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:12.262 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:12.522 nvme0n1
00:28:12.522 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:12.522 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.522 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:12.522 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.522 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:12.522 18:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:12.782 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:12.782 Zero copy mechanism will not be used.
00:28:12.782 Running I/O for 2 seconds...
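The RPC sequence traced above sets up the actual error case: retries are disabled and per-error accounting enabled, the controller is attached with TCP data digest (--ddgst), and crc32c results in the accel layer are corrupted so that data-digest verification starts failing on reads. The same sequence as plain rpc.py calls; every command and flag is copied from the trace, while the assumption that the harness's rpc_cmd addresses the target application's default RPC socket is mine:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Host side (bdevperf): count NVMe errors instead of retrying them away.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Host side: attach the remote controller with TCP data digest enabled.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Sent via rpc_cmd in the harness (default RPC socket, assumed to be the nvmf
# target app): corrupt crc32c results so digest checks fail; -i 32 as issued
# in the trace.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued bdevperf workload.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests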
00:28:12.782 [2024-10-08 18:44:06.602878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.602910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.602918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.614865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.614892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.614899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.626954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.626972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.626983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.639749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.639767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.639774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.651754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.651772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.651779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.663664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.663683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.663690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.674881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.674899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.674905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.686857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.686874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.686880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.698292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.698310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.698316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.710379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.710397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.710403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.721938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.721956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.721962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.734858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.734875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.734882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.782 [2024-10-08 18:44:06.745253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.782 [2024-10-08 18:44:06.745270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.782 [2024-10-08 18:44:06.745276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.756526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.756543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.756550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.767824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.767842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.767849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.778677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.778695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.778701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.791042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.791060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.791066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.799772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.799789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.799795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.810008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.810024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.810034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.820052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.820070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.783 [2024-10-08 18:44:06.820077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.783 [2024-10-08 18:44:06.831221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:12.783 [2024-10-08 18:44:06.831238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:12.783 [2024-10-08 18:44:06.831244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.043 [2024-10-08 18:44:06.842565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.043 [2024-10-08 18:44:06.842583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.043 [2024-10-08 18:44:06.842589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.043 [2024-10-08 18:44:06.853402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.043 [2024-10-08 18:44:06.853420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.043 [2024-10-08 18:44:06.853426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.043 [2024-10-08 18:44:06.864756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.043 [2024-10-08 18:44:06.864774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.043 [2024-10-08 18:44:06.864780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.043 [2024-10-08 18:44:06.875834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.875852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.875858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.889626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.889644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.889650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.899634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.899653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.899659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.908023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.908044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.908050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.919829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.919847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.919853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.930754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.930773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.930779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.942396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.942414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.942420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.952153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.952172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.952178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.964814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.964833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.964841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.978426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.978445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.978451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:06.992798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:06.992818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:06.992824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.003866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.003884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.003891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.015679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.015697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.015704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.027411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.027430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.027436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.039150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.039169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.039175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.050606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.050623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.050629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.061488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.061506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.061512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.072003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 
[2024-10-08 18:44:07.072022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.072028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.081721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.081740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.081746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.044 [2024-10-08 18:44:07.092464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.044 [2024-10-08 18:44:07.092483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.044 [2024-10-08 18:44:07.092489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.305 [2024-10-08 18:44:07.102899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.305 [2024-10-08 18:44:07.102917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.305 [2024-10-08 18:44:07.102927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.305 [2024-10-08 18:44:07.113465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.305 [2024-10-08 18:44:07.113484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.305 [2024-10-08 18:44:07.113490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.305 [2024-10-08 18:44:07.124878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.305 [2024-10-08 18:44:07.124898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.305 [2024-10-08 18:44:07.124905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.305 [2024-10-08 18:44:07.132450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:13.305 [2024-10-08 18:44:07.132467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.305 [2024-10-08 18:44:07.132473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.305 [2024-10-08 18:44:07.144952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
00:28:13.305 [2024-10-08 18:44:07.144972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.144984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.157039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.157057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.157063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.169401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.169419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.169426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.181407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.181426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.181433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.192951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.192970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.192981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.203688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.203709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.203715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.214792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.214810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.214816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.225448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.225466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.225472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.236574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.236592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.236598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.246611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.246629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.246636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.255902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.255920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.255926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.267129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.267147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.267153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.278064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.278082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.278089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.289935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.289953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.305 [2024-10-08 18:44:07.289959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.305 [2024-10-08 18:44:07.301747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.305 [2024-10-08 18:44:07.301766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.306 [2024-10-08 18:44:07.301772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.306 [2024-10-08 18:44:07.314247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.306 [2024-10-08 18:44:07.314265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.306 [2024-10-08 18:44:07.314272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.306 [2024-10-08 18:44:07.326966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.306 [2024-10-08 18:44:07.326989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.306 [2024-10-08 18:44:07.326996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.306 [2024-10-08 18:44:07.338667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.306 [2024-10-08 18:44:07.338686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.306 [2024-10-08 18:44:07.338692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.306 [2024-10-08 18:44:07.349632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.306 [2024-10-08 18:44:07.349650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.306 [2024-10-08 18:44:07.349657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.306 [2024-10-08 18:44:07.361185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.306 [2024-10-08 18:44:07.361204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.306 [2024-10-08 18:44:07.361210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.372767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.372786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.372792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.385184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.385203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.385209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.396578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.396596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.396605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.407204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.407223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.407229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.418394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.418412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.418418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.427988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.428007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.428013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.439235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.439254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.449719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.449738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.449744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.461205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.461223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.461229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.471888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.471907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.566 [2024-10-08 18:44:07.471913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.566 [2024-10-08 18:44:07.483191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.566 [2024-10-08 18:44:07.483210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.483216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.492188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.492207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.492213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.501604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.501624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.501630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.512494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.512513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.512519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.524602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.524620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.524627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.535290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.535308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.535315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.545961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.545984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.545990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.556747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.556766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.556772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.568439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.568458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.568464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.578729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.578747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.578757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.587841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.587859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.587865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.567 2783.00 IOPS, 347.88 MiB/s [2024-10-08T16:44:07.624Z] [2024-10-08 18:44:07.594590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.594608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.594614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.600651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.600669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.600675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.606879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.606897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.606903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.567 [2024-10-08 18:44:07.617009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.567 [2024-10-08 18:44:07.617027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.567 [2024-10-08 18:44:07.617033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.827 [2024-10-08 18:44:07.627774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.827 [2024-10-08 18:44:07.627792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.827 [2024-10-08 18:44:07.627799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.827 [2024-10-08 18:44:07.638116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.827 [2024-10-08 18:44:07.638134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.827 [2024-10-08 18:44:07.638140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.827 [2024-10-08 18:44:07.648997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.827 [2024-10-08 18:44:07.649015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.649021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.661026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.661048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.661054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.672189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.672207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.672213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.683204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.683222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.683228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.692984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.693001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.693007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.701995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.702013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.702019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.713820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.713838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.713844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.724015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.724033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.724039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.733550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.733569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.733575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.744679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.744697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.744703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.754578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.754595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.754602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.764159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.764178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.764184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.775991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.776009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.776015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.787284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.787302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.787308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.796466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.796484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.807718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.807736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.807743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.819349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.819368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.819374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.829807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.829825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.829831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.840918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.840936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.840945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.852235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.852253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.852260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.861004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.861022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.861028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.872357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.872375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.872381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.828 [2024-10-08 18:44:07.882986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:13.828 [2024-10-08 18:44:07.883004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.828 [2024-10-08 18:44:07.883010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.894547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.894572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.906253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.906270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.906277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.915830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.915848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.915854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.925502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.925520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.925526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.937886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.937907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.937914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.949602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.949620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.949626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.961532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.961551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.972866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.972883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.972890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.983538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.983556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.983563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:07.994252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:07.994270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:07.994276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.006318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.006336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.006342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.017466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.017485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.017491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.025339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.025357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.025363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.036464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.036482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.036488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.048696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.048714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.048720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.058759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.058778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.058785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.070530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.070548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.070555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.082162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.082180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.082187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.094009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.089 [2024-10-08 18:44:08.094027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.089 [2024-10-08 18:44:08.094033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.089 [2024-10-08 18:44:08.105662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.090 [2024-10-08 18:44:08.105681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.090 [2024-10-08 18:44:08.105687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.090 [2024-10-08 18:44:08.117132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.090 [2024-10-08 18:44:08.117151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.090 [2024-10-08 18:44:08.117157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.090 [2024-10-08 18:44:08.129695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.090 [2024-10-08 18:44:08.129714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.090 [2024-10-08 18:44:08.129726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.090 [2024-10-08 18:44:08.142302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.090 [2024-10-08 18:44:08.142320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.090 [2024-10-08 18:44:08.142327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.155056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.155074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.155080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.168132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.168149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.168156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.180154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.180171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.180178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.192670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.192688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.192694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.204622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.204640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.204646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.216695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.216713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.216720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.228891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.228909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.228915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.241182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.241202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.241208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.252480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.252499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.252506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.262364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.262382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.262388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.273890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.273908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.273914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.282986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.283004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.283010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.293423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.293440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.293446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.301597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.301614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.350 [2024-10-08 18:44:08.301620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.350 [2024-10-08 18:44:08.310473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.350 [2024-10-08 18:44:08.310491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.310497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.321010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.321027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.321033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.329485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.329502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.329508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.341329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.341347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.341353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.353211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.353229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.353236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.365634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.365651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.365658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.378185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.378203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.378209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.390616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.390634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.390640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:14.351 [2024-10-08 18:44:08.403369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.351 [2024-10-08 18:44:08.403387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.351 [2024-10-08 18:44:08.403393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.610 [2024-10-08 18:44:08.415970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.610 [2024-10-08 18:44:08.415992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.610 [2024-10-08 18:44:08.415998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.610 [2024-10-08 18:44:08.428383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.610 [2024-10-08 18:44:08.428400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.428410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.439618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.439636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.439642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.451969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.451993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.452000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.464827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.464845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.464851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.477446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.477462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.477469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.488526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.488544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.488551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.495478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.495496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.495502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.506607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.506625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.610 [2024-10-08 18:44:08.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.610 [2024-10-08 18:44:08.513422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.610 [2024-10-08 18:44:08.513439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.611 [2024-10-08 18:44:08.513446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.611 [2024-10-08 18:44:08.525261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.611 [2024-10-08 18:44:08.525280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.611 [2024-10-08 18:44:08.525286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.611 [2024-10-08 18:44:08.535459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.611 [2024-10-08 18:44:08.535476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.611 [2024-10-08 18:44:08.535482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.611 [2024-10-08 18:44:08.547261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.611 [2024-10-08 18:44:08.547279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.611 [2024-10-08 18:44:08.547285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.611 [2024-10-08 18:44:08.558320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.611 [2024-10-08 18:44:08.558337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.611 [2024-10-08 18:44:08.558344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.611 [2024-10-08 18:44:08.567503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.611 [2024-10-08 18:44:08.567521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.611 [2024-10-08 18:44:08.567527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.611 [2024-10-08 18:44:08.577619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0) 00:28:14.611 [2024-10-08 18:44:08.577637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.611 [2024-10-08 18:44:08.577643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.611 [2024-10-08 18:44:08.585924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.611 [2024-10-08 18:44:08.585941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.611 [2024-10-08 18:44:08.585947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:14.611 2821.00 IOPS, 352.62 MiB/s [2024-10-08T16:44:08.668Z] [2024-10-08 18:44:08.596552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa678c0)
00:28:14.611 [2024-10-08 18:44:08.596568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.611 [2024-10-08 18:44:08.596575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:14.611
00:28:14.611 Latency(us)
00:28:14.611 [2024-10-08T16:44:08.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.611 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:14.611 nvme0n1 : 2.00 2825.92 353.24 0.00 0.00 5658.24 1140.05 15619.41
00:28:14.611 [2024-10-08T16:44:08.668Z] ===================================================================================================================
00:28:14.611 [2024-10-08T16:44:08.668Z] Total : 2825.92 353.24 0.00 0.00 5658.24 1140.05 15619.41
00:28:14.611 {
00:28:14.611 "results": [
00:28:14.611 {
00:28:14.611 "job": "nvme0n1",
00:28:14.611 "core_mask": "0x2",
00:28:14.611 "workload": "randread",
00:28:14.611 "status": "finished",
00:28:14.611 "queue_depth": 16,
00:28:14.611 "io_size": 131072,
00:28:14.611 "runtime": 2.002183,
00:28:14.611 "iops": 2825.915513217323,
00:28:14.611 "mibps": 353.2394391521654,
00:28:14.611 "io_failed": 0,
00:28:14.611 "io_timeout": 0,
00:28:14.611 "avg_latency_us": 5658.235249204666,
00:28:14.611 "min_latency_us": 1140.0533333333333,
00:28:14.611 "max_latency_us": 15619.413333333334
00:28:14.611 }
00:28:14.611 ],
00:28:14.611 "core_count": 1
00:28:14.611 }
00:28:14.611 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:14.611 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:14.611 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:14.611 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:14.611 | .driver_specific
00:28:14.611 | .nvme_error
00:28:14.611 | .status_code
00:28:14.611 | .command_transient_transport_error'
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 ))
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1398006
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1398006 ']'
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1398006
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398006
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398006'
00:28:14.870 killing process with pid 1398006
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1398006
00:28:14.870 Received shutdown signal, test time was about 2.000000 seconds
00:28:14.870
00:28:14.870 Latency(us)
00:28:14.870 [2024-10-08T16:44:08.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.870 [2024-10-08T16:44:08.927Z] ===================================================================================================================
00:28:14.870 [2024-10-08T16:44:08.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:14.870 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1398006
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1398803
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1398803 /var/tmp/bperf.sock
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1398803 ']'
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
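The randwrite pass being launched here drives the same RPC sequence the randread pass above just finished. A condensed sketch of that flow, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock and the SPDK checkout sits at the workspace path used throughout this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
# Keep per-status NVMe error counters and retry transient errors indefinitely.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Leave crc32c injection disabled while the controller attaches cleanly.
$RPC accel_error_inject_error -o crc32c -t disable
# Attach the target over TCP with data digest (--ddgst) enabled.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt one in every 256 crc32c computations so received data digests mismatch.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the configured workload; each injected mismatch completes as a transient transport error.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests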
00:28:15.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:15.130 18:44:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:15.130 [2024-10-08 18:44:09.032809] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:28:15.130 [2024-10-08 18:44:09.032870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398803 ]
00:28:15.130 [2024-10-08 18:44:09.107839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:15.130 [2024-10-08 18:44:09.161237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:28:16.069 18:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:16.069 18:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:16.069 18:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:16.069 18:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:16.069 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:16.069 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.069 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.069 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.069 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:16.069 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:16.328 nvme0n1
00:28:16.328 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:16.328 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.328 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.328 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.328 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:16.328 18:44:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:16.328 Running I/O for
2 seconds... 00:28:16.588 [2024-10-08 18:44:10.392494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e49b0 00:28:16.588 [2024-10-08 18:44:10.393455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.393482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.401851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ea680 00:28:16.588 [2024-10-08 18:44:10.403153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.403171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.410167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5ec8 00:28:16.588 [2024-10-08 18:44:10.411255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.411272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.418788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e3060 00:28:16.588 [2024-10-08 18:44:10.419892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.419908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.427306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5658 00:28:16.588 [2024-10-08 18:44:10.428408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.428425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.435831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f1868 00:28:16.588 [2024-10-08 18:44:10.436937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.436953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.443478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f4298 00:28:16.588 [2024-10-08 18:44:10.444285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.444302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.450949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f9b30 00:28:16.588 [2024-10-08 18:44:10.451511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.451527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.459598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198efae0 00:28:16.588 [2024-10-08 18:44:10.460161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.460181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.468765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fa3a0 00:28:16.588 [2024-10-08 18:44:10.469450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.469467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.476860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f5378 00:28:16.588 [2024-10-08 18:44:10.477545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.477561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.487053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ec408 00:28:16.588 [2024-10-08 18:44:10.488089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.494014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f8e88 00:28:16.588 [2024-10-08 18:44:10.494750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.494768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.502931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198dece0 00:28:16.588 [2024-10-08 18:44:10.503607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.503624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.511003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f5378 00:28:16.588 [2024-10-08 18:44:10.511693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.511708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.519478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ef270 00:28:16.588 [2024-10-08 18:44:10.520184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.520200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.527970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb328 00:28:16.588 [2024-10-08 18:44:10.528673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.528689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.536476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f5378 00:28:16.588 [2024-10-08 18:44:10.537142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.537158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.544961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ef270 00:28:16.588 [2024-10-08 18:44:10.545662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.545678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.553436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb328 00:28:16.588 [2024-10-08 18:44:10.554122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.554138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.561890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f5378 00:28:16.588 [2024-10-08 18:44:10.562569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.562585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.570361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ef270 00:28:16.588 [2024-10-08 18:44:10.571048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.588 [2024-10-08 18:44:10.571064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:16.588 [2024-10-08 18:44:10.579169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fd640 00:28:16.589 [2024-10-08 18:44:10.579723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.579739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.587956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e12d8 00:28:16.589 [2024-10-08 18:44:10.588684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.588699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.596438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fbcf0 00:28:16.589 [2024-10-08 18:44:10.597211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.597226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.604907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb328 00:28:16.589 [2024-10-08 18:44:10.605676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.605692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.613388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ef270 00:28:16.589 [2024-10-08 18:44:10.614162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.614178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.621894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f3a28 00:28:16.589 [2024-10-08 18:44:10.622657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.622672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.630393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6738 00:28:16.589 [2024-10-08 18:44:10.631155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.631171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.589 [2024-10-08 18:44:10.638887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e12d8 00:28:16.589 [2024-10-08 18:44:10.639671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.589 [2024-10-08 18:44:10.639687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.647372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fbcf0 00:28:16.849 [2024-10-08 18:44:10.648110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.648126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.655834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb328 00:28:16.849 [2024-10-08 18:44:10.656607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.656623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.664355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ef270 00:28:16.849 [2024-10-08 18:44:10.665122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.665138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.672861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f3a28 00:28:16.849 [2024-10-08 18:44:10.673631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.673647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.681357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6738 00:28:16.849 [2024-10-08 18:44:10.682120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 
18:44:10.682138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.689822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e12d8 00:28:16.849 [2024-10-08 18:44:10.690605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.690620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.698289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fbcf0 00:28:16.849 [2024-10-08 18:44:10.699045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.699061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.706779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb328 00:28:16.849 [2024-10-08 18:44:10.707565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.707581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.715308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ef270 00:28:16.849 [2024-10-08 18:44:10.716047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.716063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.723814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f3a28 00:28:16.849 [2024-10-08 18:44:10.724554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.724570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.733430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6738 00:28:16.849 [2024-10-08 18:44:10.734683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.734699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.741036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f35f0 00:28:16.849 [2024-10-08 18:44:10.741633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
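Each WRITE above that completes with COMMAND TRANSIENT TRANSPORT ERROR increments the bdev's nvme_error counters, and the harness reads them back after the run just as the randread pass did. A minimal sketch of that check, reusing the socket and the jq filter shown earlier in this trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Pull per-bdev error statistics and extract the transient transport error count.
errs=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest-error test only passes if at least one such completion was recorded.
(( errs > 0 )) && echo "transient transport errors: $errs"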
00:28:16.849 [2024-10-08 18:44:10.741648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.749792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ebb98 00:28:16.849 [2024-10-08 18:44:10.750704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.750720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.758256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198de8a8 00:28:16.849 [2024-10-08 18:44:10.759175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.759191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.766727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f7538 00:28:16.849 [2024-10-08 18:44:10.767685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.767702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.775242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fda78 00:28:16.849 [2024-10-08 18:44:10.776187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.776204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.783698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fb048 00:28:16.849 [2024-10-08 18:44:10.784657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.784674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.792154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f0350 00:28:16.849 [2024-10-08 18:44:10.793123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.793139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.800619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fc560 00:28:16.849 [2024-10-08 18:44:10.801595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2090 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.801611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.808516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ec840 00:28:16.849 [2024-10-08 18:44:10.809335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.809350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.817157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6890 00:28:16.849 [2024-10-08 18:44:10.817970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.817988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.826362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ddc00 00:28:16.849 [2024-10-08 18:44:10.827410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.849 [2024-10-08 18:44:10.827426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:16.849 [2024-10-08 18:44:10.835261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fd640 00:28:16.850 [2024-10-08 18:44:10.836433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.836449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.842892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f3e60 00:28:16.850 [2024-10-08 18:44:10.844053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.844069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.851874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f0bc0 00:28:16.850 [2024-10-08 18:44:10.852617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.852632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.859569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f92c0 00:28:16.850 [2024-10-08 18:44:10.860406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.860422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.868072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e2c28 00:28:16.850 [2024-10-08 18:44:10.868859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.868875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.876505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fda78 00:28:16.850 [2024-10-08 18:44:10.877325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.877340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.885111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fb048 00:28:16.850 [2024-10-08 18:44:10.885978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.885993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.893568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fef90 00:28:16.850 [2024-10-08 18:44:10.894437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.894453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:16.850 [2024-10-08 18:44:10.902052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e3498 00:28:16.850 [2024-10-08 18:44:10.902892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.850 [2024-10-08 18:44:10.902910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.910523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fb8b8 00:28:17.110 [2024-10-08 18:44:10.911399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.911414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.918999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ee5c8 00:28:17.110 [2024-10-08 18:44:10.919877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:20264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.919893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.927450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:17.110 [2024-10-08 18:44:10.928329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.928345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.935911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ec408 00:28:17.110 [2024-10-08 18:44:10.936778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.936794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.944388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:17.110 [2024-10-08 18:44:10.945248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.945264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.952873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5ec8 00:28:17.110 [2024-10-08 18:44:10.953759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.953774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.961364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e4de8 00:28:17.110 [2024-10-08 18:44:10.962237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.962253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.969814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f1430 00:28:17.110 [2024-10-08 18:44:10.970697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.970714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.978280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198feb58 00:28:17.110 [2024-10-08 18:44:10.979172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.979190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.986763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb760 00:28:17.110 [2024-10-08 18:44:10.987649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.987666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:10.995244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e1b48 00:28:17.110 [2024-10-08 18:44:10.996123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:10.996140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.003729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6300 00:28:17.110 [2024-10-08 18:44:11.004615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.004631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.012200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fe720 00:28:17.110 [2024-10-08 18:44:11.013048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.013064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.020658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fd640 00:28:17.110 [2024-10-08 18:44:11.021518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.021534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.029106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eea00 00:28:17.110 [2024-10-08 18:44:11.029986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.030002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.037569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f2948 00:28:17.110 [2024-10-08 
18:44:11.038311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.038327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.046355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f0788 00:28:17.110 [2024-10-08 18:44:11.047320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.047336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.054406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:17.110 [2024-10-08 18:44:11.055270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.055286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.063228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fef90 00:28:17.110 [2024-10-08 18:44:11.063913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.063929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.072349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:17.110 [2024-10-08 18:44:11.073410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.073426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.080755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f92c0 00:28:17.110 [2024-10-08 18:44:11.081824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.081840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.089230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e7818 00:28:17.110 [2024-10-08 18:44:11.090323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.090339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.097699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 
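When scanning these completions, the '(00/22)' pair is the status code type and status code in hex: SCT 0x0 (generic command status) with SC 0x22, which the NVMe base specification defines as Transient Transport Error; that is why the bdev layer retries rather than fails these I/Os. A throwaway tally per queue from a saved console log (the console.log path is only a placeholder):

# Count transient-transport-error completions per qid in a captured log.
grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' console.log | sort | uniq -c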
00:28:17.110 [2024-10-08 18:44:11.098795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.098811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.110 [2024-10-08 18:44:11.106163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f92c0 00:28:17.110 [2024-10-08 18:44:11.107260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.110 [2024-10-08 18:44:11.107276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.114639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e7818 00:28:17.111 [2024-10-08 18:44:11.115725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.115741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.123110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:17.111 [2024-10-08 18:44:11.124234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.124250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.131605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f92c0 00:28:17.111 [2024-10-08 18:44:11.132691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.132707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.140110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e7818 00:28:17.111 [2024-10-08 18:44:11.141223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.141239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.148587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:17.111 [2024-10-08 18:44:11.149550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.149566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.157091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) 
with pdu=0x2000198f92c0 00:28:17.111 [2024-10-08 18:44:11.158142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.158158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.111 [2024-10-08 18:44:11.164194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ebb98 00:28:17.111 [2024-10-08 18:44:11.164884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.111 [2024-10-08 18:44:11.164899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.172084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5a90 00:28:17.370 [2024-10-08 18:44:11.172758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.172773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.182617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ee5c8 00:28:17.370 [2024-10-08 18:44:11.183634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.183650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.191129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198dfdc0 00:28:17.370 [2024-10-08 18:44:11.192114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.192129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.199743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6fa8 00:28:17.370 [2024-10-08 18:44:11.200771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.200790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.208220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e8088 00:28:17.370 [2024-10-08 18:44:11.209252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.209268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.216702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1874a80) with pdu=0x2000198ff3c8 00:28:17.370 [2024-10-08 18:44:11.217747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.217763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.225175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fd640 00:28:17.370 [2024-10-08 18:44:11.226184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.226200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.233645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f92c0 00:28:17.370 [2024-10-08 18:44:11.234691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.234706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.242188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ec840 00:28:17.370 [2024-10-08 18:44:11.243185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.243200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.250642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f0bc0 00:28:17.370 [2024-10-08 18:44:11.251673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.251689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.259106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fda78 00:28:17.370 [2024-10-08 18:44:11.260126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.260143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.267566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f4b08 00:28:17.370 [2024-10-08 18:44:11.268592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.268609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.370 [2024-10-08 18:44:11.276040] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198df118 00:28:17.370 [2024-10-08 18:44:11.277037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.370 [2024-10-08 18:44:11.277053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.284533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.371 [2024-10-08 18:44:11.285568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.285584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.293020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fc998 00:28:17.371 [2024-10-08 18:44:11.294056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.294072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.301482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f9b30 00:28:17.371 [2024-10-08 18:44:11.302526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.302542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.309947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ee5c8 00:28:17.371 [2024-10-08 18:44:11.310929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.310944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.318411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:17.371 [2024-10-08 18:44:11.319443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.319459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.326875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e4140 00:28:17.371 [2024-10-08 18:44:11.327909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.327925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.335324] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f8e88 00:28:17.371 [2024-10-08 18:44:11.336366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.336381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.343778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e9168 00:28:17.371 [2024-10-08 18:44:11.344805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.344820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.352234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e7c50 00:28:17.371 [2024-10-08 18:44:11.353246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.353262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.360722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6020 00:28:17.371 [2024-10-08 18:44:11.361761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.361777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.369201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fb048 00:28:17.371 [2024-10-08 18:44:11.370232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.370247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.377678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e01f8 00:28:17.371 [2024-10-08 18:44:11.378887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.378903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.371 29822.00 IOPS, 116.49 MiB/s [2024-10-08T16:44:11.428Z] [2024-10-08 18:44:11.386133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f0ff8 00:28:17.371 [2024-10-08 18:44:11.387137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.387153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.394591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f4f40 00:28:17.371 [2024-10-08 18:44:11.395616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.403063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f20d8 00:28:17.371 [2024-10-08 18:44:11.404095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.404111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.411525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ea248 00:28:17.371 [2024-10-08 18:44:11.412514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.412530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.371 [2024-10-08 18:44:11.420080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e9e10 00:28:17.371 [2024-10-08 18:44:11.421118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.371 [2024-10-08 18:44:11.421137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.428552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6fa8 00:28:17.632 [2024-10-08 18:44:11.429580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.429596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.437027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198df988 00:28:17.632 [2024-10-08 18:44:11.438043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.438058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.445472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198dece0 00:28:17.632 [2024-10-08 18:44:11.446520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.446535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.453957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ecc78 00:28:17.632 [2024-10-08 18:44:11.454987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.455004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.462425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fe720 00:28:17.632 [2024-10-08 18:44:11.463449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.463465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.470895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198df550 00:28:17.632 [2024-10-08 18:44:11.471922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.471938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.479387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fbcf0 00:28:17.632 [2024-10-08 18:44:11.480414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.480429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.487850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e3060 00:28:17.632 [2024-10-08 18:44:11.488880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.488897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.496475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e3d08 00:28:17.632 [2024-10-08 18:44:11.497524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.497540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.506039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e95a0 00:28:17.632 [2024-10-08 18:44:11.507493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.507508] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.512397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e7c50 00:28:17.632 [2024-10-08 18:44:11.513182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.513198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.522835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e4578 00:28:17.632 [2024-10-08 18:44:11.523963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.523982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.529795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eb760 00:28:17.632 [2024-10-08 18:44:11.530419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.530435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.538240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e1b48 00:28:17.632 [2024-10-08 18:44:11.538912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.538927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.546765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e4de8 00:28:17.632 [2024-10-08 18:44:11.547430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.547446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.555233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5ec8 00:28:17.632 [2024-10-08 18:44:11.555897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.555913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.563715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:17.632 [2024-10-08 18:44:11.564345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.564361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.572186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ec408 00:28:17.632 [2024-10-08 18:44:11.572864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.572880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.580646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f3a28 00:28:17.632 [2024-10-08 18:44:11.581285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.581302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.589113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fef90 00:28:17.632 [2024-10-08 18:44:11.589762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.589778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.597923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f2d80 00:28:17.632 [2024-10-08 18:44:11.598348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.598364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.607052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e1f80 00:28:17.632 [2024-10-08 18:44:11.608009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.608024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.614714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f2510 00:28:17.632 [2024-10-08 18:44:11.615507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.615522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.623889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e1f80 00:28:17.632 [2024-10-08 18:44:11.624848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 
18:44:11.624863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.632491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f35f0 00:28:17.632 [2024-10-08 18:44:11.633447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.632 [2024-10-08 18:44:11.633463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.632 [2024-10-08 18:44:11.640959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fa3a0 00:28:17.632 [2024-10-08 18:44:11.641914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.633 [2024-10-08 18:44:11.641931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.633 [2024-10-08 18:44:11.649439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e1f80 00:28:17.633 [2024-10-08 18:44:11.650395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.633 [2024-10-08 18:44:11.650410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.633 [2024-10-08 18:44:11.657901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f35f0 00:28:17.633 [2024-10-08 18:44:11.658855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.633 [2024-10-08 18:44:11.658871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.633 [2024-10-08 18:44:11.666363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198fa3a0 00:28:17.633 [2024-10-08 18:44:11.667327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.633 [2024-10-08 18:44:11.667342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.633 [2024-10-08 18:44:11.674829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e1f80 00:28:17.633 [2024-10-08 18:44:11.675652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.633 [2024-10-08 18:44:11.675668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.633 [2024-10-08 18:44:11.683297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f35f0 00:28:17.633 [2024-10-08 18:44:11.684220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:17.633 [2024-10-08 18:44:11.684235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.691227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f35f0 00:28:17.893 [2024-10-08 18:44:11.692070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.692086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.699853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.700699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.700714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.708441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.709287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.709303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.716882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.717741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.717757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.725331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.726199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.726215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.733793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.734667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.734683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.742284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.743127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13988 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.743143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.750729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.751598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.751614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.759179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.760025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.760041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.767628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.768485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.768501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.776085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.893 [2024-10-08 18:44:11.776931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.893 [2024-10-08 18:44:11.776947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.893 [2024-10-08 18:44:11.784541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.785409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.785426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.793005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.793809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.793824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.801433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.802285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19218 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.802300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.809878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.810741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.810757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.818332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.819221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.819236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.826807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.827682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.827698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.835262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.836126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.836142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.843726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.844571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.844587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.852172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.853020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.853035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.860618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.861465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:6089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.861484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.869077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.869937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.869953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.878633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f57b0 00:28:17.894 [2024-10-08 18:44:11.879950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.879966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.886165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f0bc0 00:28:17.894 [2024-10-08 18:44:11.886762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.886778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.894612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e73e0 00:28:17.894 [2024-10-08 18:44:11.895325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.895341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.903563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f7970 00:28:17.894 [2024-10-08 18:44:11.904612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.904628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.912054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5658 00:28:17.894 [2024-10-08 18:44:11.913098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.913113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.920548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198de8a8 00:28:17.894 [2024-10-08 18:44:11.921602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.921617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.929213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6738 00:28:17.894 [2024-10-08 18:44:11.930191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.930207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.937758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198df118 00:28:17.894 [2024-10-08 18:44:11.938788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.938804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:17.894 [2024-10-08 18:44:11.946276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5a90 00:28:17.894 [2024-10-08 18:44:11.947288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.894 [2024-10-08 18:44:11.947305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:11.954758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f81e0 00:28:18.155 [2024-10-08 18:44:11.955780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:11.955796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:11.963279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198edd58 00:28:18.155 [2024-10-08 18:44:11.964289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:11.964306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:11.971788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ddc00 00:28:18.155 [2024-10-08 18:44:11.972810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:11.972825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:11.980263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198de8a8 00:28:18.155 [2024-10-08 18:44:11.981165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:11.981181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:11.988746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e6738 00:28:18.155 [2024-10-08 18:44:11.989759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:11.989775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:11.997229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198df118 00:28:18.155 [2024-10-08 18:44:11.998219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:11.998236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.005777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198e5a90 00:28:18.155 [2024-10-08 18:44:12.006802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.006818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.014257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f81e0 00:28:18.155 [2024-10-08 18:44:12.015262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.015279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.022783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198edd58 00:28:18.155 [2024-10-08 18:44:12.023810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.023826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.030641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198f6458 00:28:18.155 [2024-10-08 18:44:12.031679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.031694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.038889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198ebb98 00:28:18.155 [2024-10-08 
18:44:12.039744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.039760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.047849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.048090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.048105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.056769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.057056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.057071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.065476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.065754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.065771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.074247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.074487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.074501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.082965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.083094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.083112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.091655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.091910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.091924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.100398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 
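Every *ERROR* record in this stretch comes from the same check: on receipt of a DATA PDU, the TCP transport recomputes the CRC32C data digest over the payload and compares it with the DDGST value carried at the end of the PDU; a mismatch is logged as "Data digest error" and the affected command is completed as a transport error. The errors here recur at a steady cadence while throughput keeps being reported, consistent with deliberate digest corruption by the test rather than a flaky link. The minimal, self-contained sketch below illustrates the calculation being checked (standard CRC32C as NVMe/TCP specifies it; this is an illustration, not SPDK's actual spdk_crc32c_update() path), with a 0x1000-byte buffer mirroring the len:0x1000 payloads in the records above.

/* Sketch only: recompute the CRC32C data digest (DDGST) for one DATA PDU
 * payload. NVMe/TCP uses CRC32C: reflected polynomial 0x82F63B78,
 * seed 0xFFFFFFFF, final complement. A receiver-side mismatch against the
 * digest in the PDU trailer is what the tcp.c records above report. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            /* Shift one bit; fold in the polynomial when the low bit was set. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[0x1000] = {0}; /* one 4 KiB block, as in len:0x1000 */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    printf("expected DDGST: 0x%08x\n", (unsigned)ddgst);
    return 0;
}

SPDK typically computes the same value incrementally, with hardware acceleration where available; a bitwise loop is used here only to keep the sketch dependency-free.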
00:28:18.155 [2024-10-08 18:44:12.100683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.100699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.109105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.109344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.109360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.117881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.118137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.118152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.126613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.126870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.126886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.135290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.135574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.135590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.144052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.144318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.144334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.152814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.153079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.153094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.155 [2024-10-08 18:44:12.161528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) 
with pdu=0x2000198eaab8 00:28:18.155 [2024-10-08 18:44:12.161774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.155 [2024-10-08 18:44:12.161789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.156 [2024-10-08 18:44:12.170266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.156 [2024-10-08 18:44:12.170556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.156 [2024-10-08 18:44:12.170578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.156 [2024-10-08 18:44:12.179033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.156 [2024-10-08 18:44:12.179294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.156 [2024-10-08 18:44:12.179309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.156 [2024-10-08 18:44:12.187787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.156 [2024-10-08 18:44:12.188056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.156 [2024-10-08 18:44:12.188071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.156 [2024-10-08 18:44:12.196516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.156 [2024-10-08 18:44:12.196644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.156 [2024-10-08 18:44:12.196659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.156 [2024-10-08 18:44:12.205258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.156 [2024-10-08 18:44:12.205473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.156 [2024-10-08 18:44:12.205488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.214017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.214275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.214290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.222777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.223028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.223043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.231560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.231830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.231846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.240325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.240611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.240626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.249163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.249422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.249437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.257941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.258252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.258268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.266709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.266994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.267016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.275430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.275694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.275718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.284204] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.284484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.284500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.293024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.293282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.293297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.301829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.302106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.302121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.310574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.310830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.310848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.319326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.319682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.319698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.328062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.328294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.328309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.336851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.337119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.337134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 
[2024-10-08 18:44:12.345581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.345813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.345828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.354319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.354638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.354653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.363041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.363295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.363309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.371773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.372020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.372035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 [2024-10-08 18:44:12.380484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874a80) with pdu=0x2000198eaab8 00:28:18.416 [2024-10-08 18:44:12.381190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.416 [2024-10-08 18:44:12.381206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:18.416 29821.50 IOPS, 116.49 MiB/s 00:28:18.416 Latency(us) 00:28:18.416 [2024-10-08T16:44:12.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:18.416 nvme0n1 : 2.00 29820.19 116.49 0.00 0.00 4285.41 1720.32 14417.92 00:28:18.416 [2024-10-08T16:44:12.474Z] =================================================================================================================== 00:28:18.417 [2024-10-08T16:44:12.474Z] Total : 29820.19 116.49 0.00 0.00 4285.41 1720.32 14417.92 00:28:18.417 { 00:28:18.417 "results": [ 00:28:18.417 { 00:28:18.417 "job": "nvme0n1", 00:28:18.417 "core_mask": "0x2", 00:28:18.417 "workload": "randwrite", 00:28:18.417 "status": "finished", 00:28:18.417 "queue_depth": 128, 00:28:18.417 "io_size": 4096, 00:28:18.417 "runtime": 2.00438, 00:28:18.417 "iops": 29820.193775631367, 00:28:18.417 "mibps": 116.48513193606003, 00:28:18.417 "io_failed": 0, 
00:28:18.417 "io_timeout": 0, 00:28:18.417 "avg_latency_us": 4285.406563941265, 00:28:18.417 "min_latency_us": 1720.32, 00:28:18.417 "max_latency_us": 14417.92 00:28:18.417 } 00:28:18.417 ], 00:28:18.417 "core_count": 1 00:28:18.417 } 00:28:18.417 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:18.417 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:18.417 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:18.417 | .driver_specific 00:28:18.417 | .nvme_error 00:28:18.417 | .status_code 00:28:18.417 | .command_transient_transport_error' 00:28:18.417 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1398803 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1398803 ']' 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1398803 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398803 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398803' 00:28:18.676 killing process with pid 1398803 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1398803 00:28:18.676 Received shutdown signal, test time was about 2.000000 seconds 00:28:18.676 00:28:18.676 Latency(us) 00:28:18.676 [2024-10-08T16:44:12.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.676 [2024-10-08T16:44:12.733Z] =================================================================================================================== 00:28:18.676 [2024-10-08T16:44:12.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.676 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1398803 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# qd=16 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1399596 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1399596 /var/tmp/bperf.sock 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1399596 ']' 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:18.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.935 18:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.935 [2024-10-08 18:44:12.822629] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:28:18.935 [2024-10-08 18:44:12.822688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399596 ] 00:28:18.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.935 Zero copy mechanism will not be used. 
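The trace above starts a second bdevperf in -z mode against /var/tmp/bperf.sock and then waits for the socket to come up before issuing RPCs. Here is a rough sketch of that launch-and-wait pattern; the polling loop is an assumption standing in for the harness's waitforlisten helper, with rpc_get_methods used only as a cheap liveness probe.

#!/usr/bin/env bash
# Sketch of the launch-and-wait pattern, not the harness's waitforlisten.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
SOCK=/var/tmp/bperf.sock

# -z makes bdevperf idle until a perform_tests RPC arrives.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll the UNIX socket until the RPC server answers; the retry cap
# mirrors the max_retries=100 seen in the trace.
for ((i = 0; i < 100; i++)); do
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done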
00:28:18.935 [2024-10-08 18:44:12.898048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.935 [2024-10-08 18:44:12.951343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.871 18:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.132 nvme0n1 00:28:20.132 18:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:20.132 18:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.132 18:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.132 18:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.132 18:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:20.132 18:44:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.394 Zero copy mechanism will not be used. 00:28:20.394 Running I/O for 2 seconds... 
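Before "Running I/O for 2 seconds..." the harness wires up the error injection: error statistics and unlimited retries on the bdevperf side, crc32c corruption on the target side, and a data-digest-enabled attach in between. The sketch below restates that sequence with every RPC taken verbatim from the trace; the tgt_rpc/bperf_rpc wrappers are illustrative stand-ins for the harness's rpc_cmd and bperf_rpc helpers.

#!/usr/bin/env bash
# Sketch of the RPC sequence traced above; two RPC sockets are in play.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                        # target app
bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; } # bdevperf

# Count NVMe errors per status code and retry transient failures forever.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c clean while attaching with data digest (--ddgst) enabled.
tgt_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results (-i 32, verbatim from the trace) so data digest
# checks start failing, then kick off the queued bdevperf job.
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests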
00:28:20.394 [2024-10-08 18:44:14.250872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.251092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.251118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.256836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.257043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.257061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.263850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.264056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.264073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.271111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.271401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.271419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.277453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.277617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.277632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.284928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.285114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.285128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.290792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.290943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.290958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.298540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.298705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.298720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.304650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.304766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.304781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.310438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.310594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.310609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.315433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.315537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.315552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.319865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.320000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.320016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.324190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.324279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.324294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.328259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.328382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.328397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.332751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.332872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.332887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.336628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.336736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.336751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.340482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.340569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.340590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.344429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.344515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.344530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.348203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.348319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.348334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.351796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.351857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.351872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.355506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.355570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.355586] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.361996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.362173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.362189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.367602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.367685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.367701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.371701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.371777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.371792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.376084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.376144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.376159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.379827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.379889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.394 [2024-10-08 18:44:14.379904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.394 [2024-10-08 18:44:14.383450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.394 [2024-10-08 18:44:14.383506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.383521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.386847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.386905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.386921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.390266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.390318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.390333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.393790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.393846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.393861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.396969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.397046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.397061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.399995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.400056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.400072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.402881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.402944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.402960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.405783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.405837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.405852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.408414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 
18:44:14.408487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.410952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.411011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.411027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.413472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.413524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.413539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.415972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.416032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.416047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.418439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.418495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.418510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.420907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.420961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.420982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.423532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.423591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.423606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.426264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.426314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:20.395 [2024-10-08 18:44:14.426330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.429447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.429498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.429517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.432413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.432471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.432487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.435176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.435234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.435249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.437906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.437963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.437983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.440549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.440601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.440616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.443682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.443785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.443800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.446830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.395 [2024-10-08 18:44:14.446888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.395 [2024-10-08 18:44:14.446903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.395 [2024-10-08 18:44:14.449374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.449437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.657 [2024-10-08 18:44:14.449455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.657 [2024-10-08 18:44:14.451854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.451908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.657 [2024-10-08 18:44:14.451923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.657 [2024-10-08 18:44:14.454375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.454444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.657 [2024-10-08 18:44:14.454460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.657 [2024-10-08 18:44:14.456800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.456867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.657 [2024-10-08 18:44:14.456882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.657 [2024-10-08 18:44:14.459247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.459305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.657 [2024-10-08 18:44:14.459320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.657 [2024-10-08 18:44:14.461814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.461865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.657 [2024-10-08 18:44:14.461880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.657 [2024-10-08 18:44:14.464536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:20.657 [2024-10-08 18:44:14.464599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.657 [2024-10-08 18:44:14.464615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.657 [2024-10-08 18:44:14.467999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.657 [2024-10-08 18:44:14.468086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.657 [2024-10-08 18:44:14.468101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.657 [2024-10-08 18:44:14.470924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.657 [2024-10-08 18:44:14.470988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.657 [2024-10-08 18:44:14.471004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.657 [2024-10-08 18:44:14.473548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.657 [2024-10-08 18:44:14.473599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.657 [2024-10-08 18:44:14.473614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.657 [2024-10-08 18:44:14.476101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.657 [2024-10-08 18:44:14.476153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.657 [2024-10-08 18:44:14.476168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.657 [2024-10-08 18:44:14.478696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.657 [2024-10-08 18:44:14.478756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.657 [2024-10-08 18:44:14.478771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.481316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.481366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.481381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.483950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.484005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.484021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.486536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.486608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.486624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.489217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.489271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.489286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.491808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.491861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.491877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.494476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.494539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.494555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.497113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.497170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.497186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.499855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.499912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.499931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.502638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.502703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.502718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.505251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.505306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.505321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.507909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.507961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.507981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.510419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.510471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.510487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.512815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.512869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.512884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.515240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.515291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.515306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.517641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.517699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.517714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.520046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.520107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.520122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.522439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.522500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.524844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.524895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.524910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.527269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.527321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.527337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.529827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.529885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.529900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.532379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.532430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.532446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.534791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.534843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.534858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.537365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.537452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.537466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.540200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.540269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.540284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.546800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.546995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.547010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.551864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.551946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.551961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.556676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.556780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.556795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.658 [2024-10-08 18:44:14.560924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.658 [2024-10-08 18:44:14.561007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.658 [2024-10-08 18:44:14.561023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.564947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.565139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.565154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.572465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.572686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.572702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.580017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.580308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.580325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.589407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.589528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.589544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.593987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.594057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.594073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.598298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.598389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.598407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.602597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.602696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.602712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.606892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.606982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.606998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.611263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.611387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.611403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.615223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.615315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.615331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.618883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.618962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.618983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.623330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.623413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.623429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.628399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.628514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.628529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.636486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.636723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.636739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.642001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.642085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.642100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.646387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.646512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.646527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.651191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.651335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.651350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.655444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.655575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.655590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.660200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.660375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.660390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.665219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.665308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.665323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.673052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.673146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.673161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.677940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.678034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.678049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.682237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.682372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.682387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.686405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.686467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.686482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.690046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.690095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.690110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.693745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.693861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.693876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.698689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.698773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.698788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.704659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.659 [2024-10-08 18:44:14.704775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.659 [2024-10-08 18:44:14.704790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.659 [2024-10-08 18:44:14.710906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 [2024-10-08 18:44:14.710987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.660 [2024-10-08 18:44:14.711002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.715909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.716066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.716081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.720945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.721028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.721044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.725931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.726094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.726112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.731949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.732066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.732081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.737628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.737715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.737730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.742029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.742104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.742120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.746134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.746223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.746238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.750271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.750407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.750422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.754079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.754164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.754180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.757658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.757758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.757773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.762899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.763087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.763102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.772822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.772924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.772939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.778243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.921 [2024-10-08 18:44:14.778355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.921 [2024-10-08 18:44:14.778370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.921 [2024-10-08 18:44:14.782722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 [2024-10-08 18:44:14.782820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.782836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.788980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.789331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.796858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.796953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.796969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.802047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.802163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.802178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.806702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.806823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.806838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.811302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.811380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.811395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.816151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.816299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.816314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.820910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.821008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.821023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.825125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.825264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.825279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.829416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.829560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.829575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.834553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.834737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.839622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.839823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.839838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.847659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.848024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.848039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.854923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.855070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.855086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.859153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.859216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.859231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.863163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.863222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.863240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.866863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.866924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.866940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.870551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.870608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.870623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.874269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.874335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.874351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.877408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.877451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.877466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.880396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.880441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.880456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.883249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.883305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.883320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.885961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.886021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.886036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.888601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.888642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.888657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.890993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.891042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.891057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.893414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.893456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.893471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.895795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.895840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.895855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.898197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.898256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.898271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.922 [2024-10-08 18:44:14.900596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.922 [2024-10-08 18:44:14.900643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.922 [2024-10-08 18:44:14.900658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.902971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.903019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.903034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.905339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.905379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.905394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.907688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.907730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.907745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.910060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.910099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.910114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.912423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.912472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.912487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.914765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.914819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.914834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.917147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.917193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.917208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.919512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.919560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.919575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.921867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.921917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.921932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.924428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.924474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.924489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.927217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.927258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.927273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.930832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.930938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.930953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.934328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.934410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.934428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.937861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.937950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.937965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.941745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.941848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.941864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.945528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.945634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.945649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.949636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.949717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.949732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.953740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.953827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.953842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.957143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.957217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.957232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.960035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.960089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.960104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.962944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.963026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.963041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.966142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.966228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.966242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.969359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.969462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.969477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.972581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.972662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.972677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.923 [2024-10-08 18:44:14.975557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:20.923 [2024-10-08 18:44:14.975672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.923 [2024-10-08 18:44:14.975687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:14.978755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:14.978839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:14.978854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:14.984542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:14.984816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:14.984830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:14.990661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:14.990748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:14.990763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:14.994525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:14.994638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:14.994653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:14.998176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:14.998285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:14.998303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:15.001635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:15.001727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:15.001742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:15.005029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:15.005085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:15.005100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:15.008899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:15.008994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:15.009009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:15.012592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.184 [2024-10-08 18:44:15.012683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.184 [2024-10-08 18:44:15.012698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:21.184 [2024-10-08 18:44:15.016224]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.184 [2024-10-08 18:44:15.016312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.184 [2024-10-08 18:44:15.016327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.184 [2024-10-08 18:44:15.019778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.184 [2024-10-08 18:44:15.019881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.184 [2024-10-08 18:44:15.019896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.184 [2024-10-08 18:44:15.023264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.184 [2024-10-08 18:44:15.023381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.184 [2024-10-08 18:44:15.023396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.184 [2024-10-08 18:44:15.026723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.184 [2024-10-08 18:44:15.026804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.026819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.030157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.030249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.030264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.033484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.033559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.033574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.037280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.037359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.037373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
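What this stream of records shows: for each affected WRITE, the NVMe/TCP layer's data_crc32_calc_done completion callback recomputes the CRC32C over the received DATA PDU payload, finds it differs from the data digest (DDGST) carried on the wire, and logs the "Data digest error" seen here; the command then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. As a minimal sketch of the digest itself — assuming only that the NVMe/TCP DDGST is the conventional CRC32C (Castagnoli) framing with an all-ones seed and a final complement; SPDK's real receive path uses its own, often hardware-accelerated, crc32c helpers (e.g. spdk_crc32c_update()), and exactly which PDU bytes a digest covers is defined by the NVMe/TCP specification — the following self-contained, illustrative C program (all names in it are this sketch's own, not SPDK's) reproduces the checksum:

    /*
     * Minimal CRC32C (Castagnoli) sketch -- the checksum behind the
     * NVMe/TCP data digest whose mismatch data_crc32_calc_done reports
     * above. Bit-at-a-time reference implementation for illustration
     * only; not SPDK's optimized code path.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define CRC32C_POLY_REFL 0x82F63B78u  /* 0x1EDC6F41, bit-reversed */

    static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                crc = (crc & 1) ? (crc >> 1) ^ CRC32C_POLY_REFL : crc >> 1;
            }
        }
        return crc;
    }

    int main(void)
    {
        /* Conventional CRC32C framing: seed with ~0, complement the
         * result. The standard check value for "123456789" is
         * 0xE3069283, which this prints. */
        const char payload[] = "123456789";
        uint32_t crc = crc32c_update(0xFFFFFFFFu, payload, strlen(payload));

        crc ^= 0xFFFFFFFFu;
        printf("crc32c(\"123456789\") = 0x%08X\n", crc);
        return 0;
    }

A digest verifier on the receive path runs the same update over the reassembled payload and compares the result against the wire DDGST; any difference produces exactly the pattern logged in this run — an *ERROR* from the digest check, the offending WRITE printed, and a transient, retryable (dnr:0) completion.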
00:28:21.185 [2024-10-08 18:44:15.040978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.041055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.041070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.044362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.044434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.044448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.047544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.047605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.047620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.051204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.051274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.051289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.056269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.056346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.056361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.060411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.060532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.060547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.064254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.064327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.064342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.067870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.067955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.067970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.071668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.071714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.071729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.074930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.074991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.075007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.078114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.078195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.078210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.081424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.081478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.081493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.084738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.084812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.084827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.088996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.089075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.089089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.093544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.093649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.093667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.097822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.097914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.097928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.102272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.102392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.102407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.107718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.107798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.107813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.113518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.113766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.113782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.119509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.119610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.119625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.126349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.126612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.126628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.133118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.133394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.133410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.138225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.138285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.138301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.185 [2024-10-08 18:44:15.142273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.185 [2024-10-08 18:44:15.142348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.185 [2024-10-08 18:44:15.142364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.146157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.146265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.146279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.153165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.153429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.153444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.162530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.162613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.162628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.167014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.167121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 
[2024-10-08 18:44:15.167136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.171210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.171294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.171309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.175635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.175807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.175822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.181315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.181506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.181521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.187028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.187217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.187232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.192304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.192383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.192397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.200060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.200157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.200171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.204456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.204516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.204531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.208589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.208664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.208679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.212652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.212726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.212741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.215886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.215935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.215950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.218898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.218951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.218966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.221868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.221918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.221933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.224719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.224828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.224845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.227814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.227882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.227897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.233554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.186 [2024-10-08 18:44:15.233647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.186 [2024-10-08 18:44:15.233662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.186 [2024-10-08 18:44:15.239437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.447 [2024-10-08 18:44:15.239678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.447 [2024-10-08 18:44:15.239694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.447 7597.00 IOPS, 949.62 MiB/s [2024-10-08T16:44:15.504Z] [2024-10-08 18:44:15.245873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.447 [2024-10-08 18:44:15.245964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.447 [2024-10-08 18:44:15.245984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.447 [2024-10-08 18:44:15.250296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.447 [2024-10-08 18:44:15.250409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.447 [2024-10-08 18:44:15.250424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.447 [2024-10-08 18:44:15.254200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.447 [2024-10-08 18:44:15.254289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.447 [2024-10-08 18:44:15.254304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.447 [2024-10-08 18:44:15.257967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.447 [2024-10-08 18:44:15.258036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.447 [2024-10-08 18:44:15.258051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.447 [2024-10-08 18:44:15.261677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 
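The progress tick interleaved just above (7597.00 IOPS, 949.62 MiB/s) is self-consistent with the I/O shape in these records: every WRITE is len:32 blocks, and assuming a 4 KiB LBA format (an assumption; the namespace format is not shown in this excerpt), each command moves 32 × 4096 B = 128 KiB, so 7597.00 IOPS × 128 KiB = 7597/8 MiB/s = 949.625 ≈ 949.62 MiB/s.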
00:28:21.447 [2024-10-08 18:44:15.261745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.447 [2024-10-08 18:44:15.261760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.447 [2024-10-08 18:44:15.265481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.265548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.265563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.269340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.269408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.269423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.273318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.273425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.273440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.277304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.277416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.277430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.281158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.281257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.281272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.285192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.285311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.285326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.288820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.288895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.288910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.294386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.294696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.294712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.302908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.303030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.303048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.306559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.306636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.306651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.310254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.310336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.310351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.314249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.314309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.314324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.318338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.318422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.318436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.323076] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.323270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.323285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.329286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.329372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.329388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.332834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.332935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.332950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.336755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.336841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.336856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.340698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.340763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.340778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.344470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.344575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.344589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.350628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.350903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.350919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 
[2024-10-08 18:44:15.358953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.359038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.359053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.366760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.367063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.367079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.372983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.373068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.373083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.377276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.377387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.377401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.381882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.381955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.381970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.386269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.386381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.386396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.390457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.390534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.390549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.394431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.448 [2024-10-08 18:44:15.394527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.448 [2024-10-08 18:44:15.394542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.448 [2024-10-08 18:44:15.398989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.399231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.399246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.407836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.407938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.407953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.412365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.412437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.412452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.416604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.416670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.416684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.420519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.420607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.420622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.423967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.424042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.424057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.427420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.427483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.427500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.430810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.430910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.430925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.434325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.434422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.434437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.440378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.440592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.440607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.447921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.447995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.448010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.456602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.456904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.456920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.449 [2024-10-08 18:44:15.465858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:21.449 [2024-10-08 18:44:15.466115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.449 [2024-10-08 18:44:15.466131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:21.449 [2024-10-08 18:44:15.476160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.449 [2024-10-08 18:44:15.476313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.449 [2024-10-08 18:44:15.476328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:21.449 [2024-10-08 18:44:15.483730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:21.449 [2024-10-08 18:44:15.483822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.449 [2024-10-08 18:44:15.483837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens more identical Data digest error / WRITE / TRANSIENT TRANSPORT ERROR (00/22) triplets omitted: same tqpair=(0x1874dc0) and pdu=0x2000198fef90, qid:1, len:32, cid:0 and later cid:15, lba and sqhd varying (sqhd cycling 0001/0021/0041/0061), from 18:44:15.487 through 18:44:16.037 ...]
00:28:22.239 [2024-10-08 18:44:16.040982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:22.239 [2024-10-08 18:44:16.041066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-08 18:44:16.041080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.044322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.044439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.044454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.049161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.049568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.049583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.056226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.056305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.056320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.059885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.059988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.060003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.063766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.063875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.063890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.068004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.068099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.068114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.072564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.072630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.072645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.077227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.077289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.077305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.082907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.083177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.083193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.091310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.091527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.091542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.098587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.098673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.098687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.107719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.107810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.107825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.112360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.112423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.112438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.116526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.116606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.116622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.120741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.120809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.120824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.124470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.124554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.124569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.127880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.127993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.128008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.131412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.131593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.131611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.137580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.137661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.137676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.141613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.141722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.141737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.144904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.144970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.144991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.148197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.148289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.148304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.151536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.151592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.151607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.154841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.154909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.154923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.158195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.239 [2024-10-08 18:44:16.158283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.239 [2024-10-08 18:44:16.158298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.239 [2024-10-08 18:44:16.161502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.161593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.161608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.164545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.164613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.164628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.167740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 
18:44:16.167840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.167855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.172190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.172291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.172306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.177139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.177211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.177226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.180288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.180340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.180355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.183402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.183465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.183480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.186426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.186474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.186489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.189736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.189811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.189826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.197607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with 
pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.197955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.197971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.202938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.203033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.203048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.206396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.206463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.206478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.209747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.209803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.209818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.212725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.212806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.212821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.215525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.215588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.215603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.218161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.218212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.218227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.220801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.220868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.220882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.223926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.224036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.224051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.227206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.227286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.227304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.230425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.230531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.230546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.233403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.233471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.233486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.236740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.236848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.236863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.240 [2024-10-08 18:44:16.240009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90 00:28:22.240 [2024-10-08 18:44:16.240087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.240 [2024-10-08 18:44:16.240102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.240 7313.00 IOPS, 914.12 MiB/s [2024-10-08T16:44:16.297Z] 
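The triplets above are the expected signature of the digest-error run: bperf issues WRITEs with the TCP data digest enabled while digest errors are being provoked, SPDK's data_crc32_calc_done() (tcp.c:2233) recomputes the CRC32C over each received data PDU, and every mismatch surfaces as an NVMe completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal Python sketch of the digest arithmetic, assuming the standard NVMe/TCP DDGST convention (reflected CRC32C, seed 0xFFFFFFFF, final complement); the payload below is hypothetical:

def crc32c(data: bytes) -> int:
    # Table-free CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
    # seed 0xFFFFFFFF, final complement - the convention NVMe/TCP's DDGST uses.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc >> 1) ^ 0x82F63B78) if (crc & 1) else (crc >> 1)
    return crc ^ 0xFFFFFFFF

payload = bytes(16 * 8192)            # hypothetical 128 KiB payload (io_size 131072 below)
ddgst = crc32c(payload)               # digest the sender appends to the data PDU
corrupted = b"\x01" + payload[1:]     # a single byte flipped in flight
assert crc32c(corrupted) != ddgst     # receiver recomputes and sees "Data digest error"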
[2024-10-08 18:44:16.244256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1874dc0) with pdu=0x2000198fef90
00:28:22.240 [2024-10-08 18:44:16.244336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.240 [2024-10-08 18:44:16.244350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:22.240
00:28:22.240 Latency(us)
00:28:22.240 [2024-10-08T16:44:16.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.240 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:22.240 nvme0n1 : 2.00 7306.86 913.36 0.00 0.00 2185.66 1140.05 10485.76
00:28:22.240 [2024-10-08T16:44:16.297Z] ===================================================================================================================
00:28:22.240 [2024-10-08T16:44:16.297Z] Total : 7306.86 913.36 0.00 0.00 2185.66 1140.05 10485.76
00:28:22.240 {
00:28:22.240   "results": [
00:28:22.240     {
00:28:22.240       "job": "nvme0n1",
00:28:22.240       "core_mask": "0x2",
00:28:22.240       "workload": "randwrite",
00:28:22.240       "status": "finished",
00:28:22.240       "queue_depth": 16,
00:28:22.240       "io_size": 131072,
00:28:22.240       "runtime": 2.004419,
00:28:22.240       "iops": 7306.855502766637,
00:28:22.240       "mibps": 913.3569378458296,
00:28:22.240       "io_failed": 0,
00:28:22.240       "io_timeout": 0,
00:28:22.240       "avg_latency_us": 2185.6613227729986,
00:28:22.240       "min_latency_us": 1140.0533333333333,
00:28:22.240       "max_latency_us": 10485.76
00:28:22.240     }
00:28:22.240   ],
00:28:22.240   "core_count": 1
00:28:22.240 }
00:28:22.240 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:22.241 | .driver_specific
00:28:22.241 | .nvme_error
00:28:22.241 | .status_code
00:28:22.241 | .command_transient_transport_error'
00:28:22.241 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 472 > 0 ))
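The trace just above shows how the pass/fail condition is computed: get_transient_errcount pipes the bdev_get_iostat RPC (issued via rpc.py against the bperf socket) through the jq filter, and the test asserts the resulting counter is positive; here the traced value is 472. A rough Python equivalent of that extraction, using the RPC path, socket, and JSON shape exactly as traced (the function is a sketch of the shell helper, not SPDK code):

import json, subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def get_transient_errcount(bdev: str, sock: str = "/var/tmp/bperf.sock") -> int:
    # Same RPC the harness traces: bdev_get_iostat -b nvme0n1 over the bperf socket
    out = subprocess.check_output([RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
    stat = json.loads(out)
    # Mirrors the jq filter: .bdevs[0] | .driver_specific | .nvme_error
    #                        | .status_code | .command_transient_transport_error
    return stat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

assert get_transient_errcount("nvme0n1") > 0   # the (( 472 > 0 )) check above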
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1399596
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1399596 ']'
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1399596
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1399596
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1399596'
00:28:22.501 killing process with pid 1399596
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1399596
00:28:22.501 Received shutdown signal, test time was about 2.000000 seconds
00:28:22.501
00:28:22.501 Latency(us)
00:28:22.501 [2024-10-08T16:44:16.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.501 [2024-10-08T16:44:16.558Z] ===================================================================================================================
00:28:22.501 [2024-10-08T16:44:16.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:22.501 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1399596
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1397081
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1397081 ']'
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1397081
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1397081
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1397081'
00:28:22.761 killing process with pid 1397081
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1397081
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1397081
00:28:22.761
00:28:22.761 real 0m16.645s user 0m32.849s sys 0m3.688s
18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:22.761 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:22.761 ************************************
00:28:22.761 END TEST nvmf_digest_error
00:28:22.761 ************************************
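killprocess, traced twice in the teardown above, follows a fixed pattern: bail out if the pid is empty, probe the pid with kill -0, read the process name with ps, refuse to signal a sudo wrapper directly, then kill and wait. A hedged Python rendering of that flow (the pids and the reactor_0/reactor_1 names come from the log; the function is illustrative, not the harness's own code):

import os, signal, subprocess

def killprocess(pid: int) -> None:
    os.kill(pid, 0)  # like `kill -0 $pid`: raises ProcessLookupError if the pid
                     # is gone (the "(1397081) - No such process" branch just below)
    name = subprocess.check_output(
        ["ps", "--no-headers", "-o", "comm=", str(pid)], text=True).strip()
    if name == "sudo":               # the `'[' reactor_X = sudo ']'` guard
        raise RuntimeError("refusing to signal a sudo wrapper directly")
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)     # plain `kill $pid`
    # The shell helper finishes with `wait $pid`, which only works for children
    # of the same shell; a standalone equivalent would poll kill -0 instead.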
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:23.022 rmmod nvme_tcp
00:28:23.022 rmmod nvme_fabrics
00:28:23.022 rmmod nvme_keyring
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1397081 ']'
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1397081
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1397081 ']'
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1397081
00:28:23.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1397081) - No such process
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1397081 is not found'
00:28:23.022 Process with pid 1397081 is not found
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:23.022 18:44:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:25.572 18:44:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:25.572
00:28:25.572 real 0m43.616s user 1m8.333s sys 0m13.234s
18:44:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:25.573 ************************************
00:28:25.573 END TEST nvmf_digest
00:28:25.573 ************************************
00:28:25.573 18:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:28:25.573 18:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:28:25.573 18:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:28:25.573 18:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:25.573 18:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:25.573 18:44:19
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.573 ************************************ 00:28:25.573 START TEST nvmf_bdevperf 00:28:25.573 ************************************ 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:25.573 * Looking for test storage... 00:28:25.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:25.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.573 --rc genhtml_branch_coverage=1 00:28:25.573 --rc genhtml_function_coverage=1 00:28:25.573 --rc genhtml_legend=1 00:28:25.573 --rc geninfo_all_blocks=1 00:28:25.573 --rc geninfo_unexecuted_blocks=1 00:28:25.573 00:28:25.573 ' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:25.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.573 --rc genhtml_branch_coverage=1 00:28:25.573 --rc genhtml_function_coverage=1 00:28:25.573 --rc genhtml_legend=1 00:28:25.573 --rc geninfo_all_blocks=1 00:28:25.573 --rc geninfo_unexecuted_blocks=1 00:28:25.573 00:28:25.573 ' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:25.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.573 --rc genhtml_branch_coverage=1 00:28:25.573 --rc genhtml_function_coverage=1 00:28:25.573 --rc genhtml_legend=1 00:28:25.573 --rc geninfo_all_blocks=1 00:28:25.573 --rc geninfo_unexecuted_blocks=1 00:28:25.573 00:28:25.573 ' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:25.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.573 --rc genhtml_branch_coverage=1 00:28:25.573 --rc genhtml_function_coverage=1 00:28:25.573 --rc genhtml_legend=1 00:28:25.573 --rc geninfo_all_blocks=1 00:28:25.573 --rc geninfo_unexecuted_blocks=1 00:28:25.573 00:28:25.573 ' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.573 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.574 18:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:33.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:33.712 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:33.712 Found net devices under 0000:31:00.0: cvl_0_0 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.712 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:33.713 Found net devices under 0000:31:00.1: cvl_0_1 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:28:33.713 00:28:33.713 --- 10.0.0.2 ping statistics --- 00:28:33.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.713 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:28:33.713 00:28:33.713 --- 10.0.0.1 ping statistics --- 00:28:33.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.713 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:33.713 18:44:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1404620 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1404620 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1404620 ']' 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:33.713 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.713 [2024-10-08 18:44:27.096979] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
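[editor note] The nvmf_tcp_init block traced above builds a two-port test topology: one physical port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and serves as the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with both ping directions verified. Condensed from the trace (the harness's ipts wrapper also tags the iptables rule with an SPDK_NVMF comment, omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, succeeds as logged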
00:28:33.713 [2024-10-08 18:44:27.097045] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.713 [2024-10-08 18:44:27.187555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:33.713 [2024-10-08 18:44:27.282137] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.713 [2024-10-08 18:44:27.282201] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.713 [2024-10-08 18:44:27.282210] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.713 [2024-10-08 18:44:27.282218] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.713 [2024-10-08 18:44:27.282224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.713 [2024-10-08 18:44:27.283616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.713 [2024-10-08 18:44:27.283774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.713 [2024-10-08 18:44:27.283774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.974 18:44:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.974 [2024-10-08 18:44:27.978196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.974 Malloc0 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.974 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
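[editor note] The target is launched with -m 0xE, and the log duly reports three reactors on cores 1, 2 and 3: the core mask is a plain bitmap, and core 0 is deliberately left free for bdevperf, which is started later with -c 0x1. A tiny sketch of the decoding:

  # -m 0xE has bits 1-3 set, so SPDK places one reactor on each of cores 1, 2, 3.
  mask=0xE
  for core in {0..7}; do
      if (( (mask >> core) & 1 )); then
          echo "reactor on core $core"
      fi
  done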
00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.235 [2024-10-08 18:44:28.058549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:34.235 { 00:28:34.235 "params": { 00:28:34.235 "name": "Nvme$subsystem", 00:28:34.235 "trtype": "$TEST_TRANSPORT", 00:28:34.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.235 "adrfam": "ipv4", 00:28:34.235 "trsvcid": "$NVMF_PORT", 00:28:34.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.235 "hdgst": ${hdgst:-false}, 00:28:34.235 "ddgst": ${ddgst:-false} 00:28:34.235 }, 00:28:34.235 "method": "bdev_nvme_attach_controller" 00:28:34.235 } 00:28:34.235 EOF 00:28:34.235 )") 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:34.235 18:44:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:34.235 "params": { 00:28:34.235 "name": "Nvme1", 00:28:34.235 "trtype": "tcp", 00:28:34.235 "traddr": "10.0.0.2", 00:28:34.235 "adrfam": "ipv4", 00:28:34.235 "trsvcid": "4420", 00:28:34.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.235 "hdgst": false, 00:28:34.235 "ddgst": false 00:28:34.235 }, 00:28:34.235 "method": "bdev_nvme_attach_controller" 00:28:34.235 }' 00:28:34.235 [2024-10-08 18:44:28.126762] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
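[editor note] The tgt_init sequence above stands the target up with five RPCs. rpc_cmd is a harness wrapper that forwards to SPDK's scripts/rpc.py against /var/tmp/spdk.sock (a UNIX socket, so it is reachable even though the target runs inside the namespace); the same calls, with the arguments exactly as logged:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420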
00:28:34.235 [2024-10-08 18:44:28.126829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404806 ] 00:28:34.235 [2024-10-08 18:44:28.205523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.496 [2024-10-08 18:44:28.301244] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.496 Running I/O for 1 seconds... 00:28:35.878 8732.00 IOPS, 34.11 MiB/s 00:28:35.878 Latency(us) 00:28:35.878 [2024-10-08T16:44:29.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:35.878 Verification LBA range: start 0x0 length 0x4000 00:28:35.878 Nvme1n1 : 1.01 8810.34 34.42 0.00 0.00 14464.68 1638.40 12724.91 00:28:35.878 [2024-10-08T16:44:29.935Z] =================================================================================================================== 00:28:35.878 [2024-10-08T16:44:29.935Z] Total : 8810.34 34.42 0.00 0.00 14464.68 1638.40 12724.91 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1405140 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:35.878 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:35.878 { 00:28:35.878 "params": { 00:28:35.878 "name": "Nvme$subsystem", 00:28:35.878 "trtype": "$TEST_TRANSPORT", 00:28:35.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.879 "adrfam": "ipv4", 00:28:35.879 "trsvcid": "$NVMF_PORT", 00:28:35.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.879 "hdgst": ${hdgst:-false}, 00:28:35.879 "ddgst": ${ddgst:-false} 00:28:35.879 }, 00:28:35.879 "method": "bdev_nvme_attach_controller" 00:28:35.879 } 00:28:35.879 EOF 00:28:35.879 )") 00:28:35.879 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:35.879 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
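[editor note] The --json /dev/fd/62 and /dev/fd/63 arguments above are process substitution: gen_nvmf_target_json assembles the bdev_nvme_attach_controller entry shown in the trace (via the config+=("$(cat <<-EOF ...)") heredoc) and bdevperf reads the resulting JSON as if it were a file. A minimal sketch of the second, 15-second run under that assumption:

  # Equivalent invocation without the harness wrapper; gen_nvmf_target_json is
  # the nvmf/common.sh helper traced above.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f

This second run is the failover half of the test: the harness kill -9s the target pid (1404620) three seconds in, and the long run of ABORTED - SQ DELETION completions that follows is the expected fallout, as every outstanding I/O on the deleted submission queues is aborted back to bdevperf.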
00:28:35.879 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:35.879 18:44:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:35.879 "params": { 00:28:35.879 "name": "Nvme1", 00:28:35.879 "trtype": "tcp", 00:28:35.879 "traddr": "10.0.0.2", 00:28:35.879 "adrfam": "ipv4", 00:28:35.879 "trsvcid": "4420", 00:28:35.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:35.879 "hdgst": false, 00:28:35.879 "ddgst": false 00:28:35.879 }, 00:28:35.879 "method": "bdev_nvme_attach_controller" 00:28:35.879 }' 00:28:35.879 [2024-10-08 18:44:29.707858] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:28:35.879 [2024-10-08 18:44:29.707917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405140 ] 00:28:35.879 [2024-10-08 18:44:29.785000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.879 [2024-10-08 18:44:29.849369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.139 Running I/O for 15 seconds... 00:28:38.016 10759.00 IOPS, 42.03 MiB/s [2024-10-08T16:44:33.014Z] 11037.00 IOPS, 43.11 MiB/s [2024-10-08T16:44:33.014Z] 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1404620 00:28:38.957 18:44:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:38.957 [2024-10-08 18:44:32.672072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 
18:44:32.672226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.957 [2024-10-08 18:44:32.672624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.957 [2024-10-08 18:44:32.672759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.957 [2024-10-08 18:44:32.672768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.958 [2024-10-08 18:44:32.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.672988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.672997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673129] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673298] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.958 [2024-10-08 18:44:32.673474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.958 [2024-10-08 18:44:32.673484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.959 [2024-10-08 18:44:32.673625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.959 [2024-10-08 18:44:32.673635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 
00:28:38.959 [2024-10-08 18:44:32.673642 .. 18:44:32.674316] nvme_qpair.c: condensed (the span opens mid-entry with one trailing completion): the abort flood continues. 34 WRITE commands (sqid:1, varying cid, nsid:1, lba:105744..106008 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 6 READ commands (sqid:1, varying cid, nsid:1, lba:105208..105248, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) are each printed by nvme_io_qpair_print_command and immediately completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.
00:28:38.960 [2024-10-08 18:44:32.674325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fe600 is same with the state(6) to be set
00:28:38.960 [2024-10-08 18:44:32.674335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:38.960 [2024-10-08 18:44:32.674341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:38.960 [2024-10-08 18:44:32.674347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105256 len:8 PRP1 0x0 PRP2 0x0
00:28:38.960 [2024-10-08 18:44:32.674355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.960 [2024-10-08 18:44:32.674392] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11fe600 was disconnected and freed. reset controller.
00:28:38.960 [2024-10-08 18:44:32.674436 .. 18:44:32.674504] condensed: four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0..3, cdw10:00000000 cdw11:00000000) are printed and completed ABORTED - SQ DELETION (00/08); nvme_tcp.c then reports that the recv state of tqpair=0x11ec100 is same with the state(6) to be set.
00:28:38.960 [2024-10-08 18:44:32.677997 .. 18:44:32.710347] condensed: the controller reset loop begins, and every iteration emits the same sequence: nvme_ctrlr.c:1744:nvme_ctrlr_disconnect *NOTICE* [nqn.2016-06.io.spdk:cnode1] resetting controller; posix.c:1055:posix_sock_create *ERROR* connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock *ERROR* sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420; nvme_tcp.c: 337 *ERROR* The recv state of tqpair=0x11ec100 is same with the state(6) to be set; nvme_tcp.c:2196 *ERROR* Failed to flush tqpair=0x11ec100 (9): Bad file descriptor; nvme_ctrlr.c:4224:nvme_ctrlr_process_init *ERROR* Ctrlr is in error state; nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async *ERROR* controller reinitialization failed; nvme_ctrlr.c:1126:nvme_ctrlr_fail *ERROR* in failed state.; bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete *ERROR* Resetting controller failed. The first three iterations start at 18:44:32.677997, 18:44:32.691968, and 18:44:32.705779.
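Editor's note on the status notation above: spdk_nvme_print_completion renders the NVMe completion status as (SCT/SC). Per the NVMe base specification, status code type 0x0 is Generic Command Status, and status code 0x08 within it is Command Aborted due to SQ Deletion, i.e. these WRITEs and READs were still in flight when their submission queue was torn down. A minimal, hypothetical C helper (illustration only, not SPDK code) that decodes a completion's dword 3 the same way:

#include <stdint.h>
#include <stdio.h>

/* NVMe CQE dword 3: bit 16 is the phase tag; bits 31:17 are the
 * status field (SC in bits 7:0 of the field, SCT in bits 10:8,
 * DNR in bit 14). The log prints the pair as "(SCT/SC)". */
static void decode_status(uint32_t cpl_dw3)
{
    uint16_t status = (uint16_t)(cpl_dw3 >> 17); /* drop phase tag */
    uint8_t  sc  = status & 0xff;                /* status code      */
    uint8_t  sct = (status >> 8) & 0x7;          /* status code type */
    int      dnr = (status >> 14) & 0x1;         /* do not retry     */

    printf("(%02x/%02x) dnr:%d%s\n", sct, sc, dnr,
           (sct == 0x0 && sc == 0x08) ?
           " -> ABORTED - SQ DELETION (generic status: the command's"
           " submission queue was deleted before it completed)" : "");
}

int main(void)
{
    decode_status((uint32_t)0x08 << 17); /* SCT=0x0, SC=0x08, as logged */
    return 0;
}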
00:28:38.960-00:28:38.962 [2024-10-08 18:44:32.719606 .. 18:44:32.987563] condensed: twenty further identical reset iterations, roughly one every 14 ms (resetting controller; connect() failed, errno = 111; sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420; The recv state of tqpair=0x11ec100 is same with the state(6) to be set; Failed to flush tqpair=0x11ec100 (9): Bad file descriptor; Ctrlr is in error state; controller reinitialization failed; in failed state.; Resetting controller failed.). The first of these starts at 18:44:32.719606 and the last at 18:44:32.982711.
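On the recurring "connect() failed, errno = 111": on Linux, errno 111 is ECONNREFUSED, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the test has the target down, so every reconnect attempt is refused immediately. A self-contained POSIX sketch that reproduces the same errno when no listener is bound to the port (127.0.0.1 is used here purely so it is locally reproducible; it is an assumption, not the test's target address):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),  /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumes no listener */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux reports ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}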
00:28:38.962 [2024-10-08 18:44:32.996625 .. 18:44:33.001416] condensed: one more identical reset iteration fails (connect() failed, errno = 111; Resetting controller failed.).
00:28:39.224 9836.00 IOPS, 38.42 MiB/s [2024-10-08T16:44:33.281Z]
00:28:39.224 [2024-10-08 18:44:33.011544 .. 18:44:33.016399] condensed: the next iteration fails the same way (connect() failed, errno = 111 against addr=10.0.0.2, port=4420; Resetting controller failed.).
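Sanity check on the interleaved throughput sample: the aborted commands above carry len:8 logical blocks in len:0x1000 (4096 B) buffers, implying a 512 B block size and a 4 KiB I/O size. Then 9836.00 IOPS x 4096 B = 40,288,256 B/s, and 40,288,256 / 2^20 = 38.42 MiB/s, which matches the reported figure exactly.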
00:28:39.224 [2024-10-08 18:44:33.025458] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.224 [2024-10-08 18:44:33.026097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.224 [2024-10-08 18:44:33.026161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.224 [2024-10-08 18:44:33.026176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.224 [2024-10-08 18:44:33.026429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.224 [2024-10-08 18:44:33.026653] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.224 [2024-10-08 18:44:33.026662] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.224 [2024-10-08 18:44:33.026670] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.224 [2024-10-08 18:44:33.030200] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.224 [2024-10-08 18:44:33.039263] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.224 [2024-10-08 18:44:33.039984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.040048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.040061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.040314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.040544] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.040555] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.040563] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.044079] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.225 [2024-10-08 18:44:33.053138] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.053860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.053923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.053937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.054203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.054427] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.054436] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.054445] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.057955] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.225 [2024-10-08 18:44:33.067019] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.067736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.067798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.067810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.068076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.068301] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.068311] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.068319] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.071827] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.225 [2024-10-08 18:44:33.080885] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.081563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.081625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.081638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.081890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.082127] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.082138] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.082147] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.085665] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.225 [2024-10-08 18:44:33.094720] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.095417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.095480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.095494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.095746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.095969] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.095996] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.096004] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.099519] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.225 [2024-10-08 18:44:33.108580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.109143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.109207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.109221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.109475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.109698] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.109708] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.109717] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.113254] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.225 [2024-10-08 18:44:33.122339] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.123053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.123117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.123130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.123383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.123606] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.123615] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.123623] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.127147] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.225 [2024-10-08 18:44:33.136212] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.136936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.137012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.137033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.137285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.137509] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.137518] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.137526] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.141054] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.225 [2024-10-08 18:44:33.150126] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.150835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.150897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.150910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.151180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.151403] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.151413] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.151421] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.154930] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.225 [2024-10-08 18:44:33.163996] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.164679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.164741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.164754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.165023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.165247] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.165257] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.225 [2024-10-08 18:44:33.165265] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.225 [2024-10-08 18:44:33.168774] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.225 [2024-10-08 18:44:33.177846] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.225 [2024-10-08 18:44:33.178505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.225 [2024-10-08 18:44:33.178569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:39.225 [2024-10-08 18:44:33.178582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:39.225 [2024-10-08 18:44:33.178835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:39.225 [2024-10-08 18:44:33.179075] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.225 [2024-10-08 18:44:33.179092] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.226 [2024-10-08 18:44:33.179101] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.226 [2024-10-08 18:44:33.182620] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.226 [2024-10-08 18:44:33.191694] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.192372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.192435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.192448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.192700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.192923] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.192932] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.192940] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.196464] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.226 [2024-10-08 18:44:33.205514] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.206256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.206318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.206331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.206583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.206806] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.206815] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.206823] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.210345] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.226 [2024-10-08 18:44:33.219437] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.220155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.220218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.220231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.220483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.220706] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.220715] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.220724] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.224257] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.226 [2024-10-08 18:44:33.233319] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.233954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.233991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.234001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.234221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.234439] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.234449] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.234457] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.237950] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.226 [2024-10-08 18:44:33.247198] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.247752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.247773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.247782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.248010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.248228] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.248237] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.248245] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.251739] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.226 [2024-10-08 18:44:33.260984] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.261537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.261558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.261567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.261784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.262009] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.262020] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.262028] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.265523] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.226 [2024-10-08 18:44:33.274792] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.226 [2024-10-08 18:44:33.275482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.226 [2024-10-08 18:44:33.275545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.226 [2024-10-08 18:44:33.275566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.226 [2024-10-08 18:44:33.275820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.226 [2024-10-08 18:44:33.276060] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.226 [2024-10-08 18:44:33.276070] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.226 [2024-10-08 18:44:33.276079] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.226 [2024-10-08 18:44:33.279597] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.488 [2024-10-08 18:44:33.288670] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.488 [2024-10-08 18:44:33.289227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.488 [2024-10-08 18:44:33.289256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.488 [2024-10-08 18:44:33.289266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.488 [2024-10-08 18:44:33.289487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.488 [2024-10-08 18:44:33.289705] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.488 [2024-10-08 18:44:33.289714] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.488 [2024-10-08 18:44:33.289723] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.488 [2024-10-08 18:44:33.293229] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.488 [2024-10-08 18:44:33.302557] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.488 [2024-10-08 18:44:33.303055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.488 [2024-10-08 18:44:33.303078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.488 [2024-10-08 18:44:33.303087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.488 [2024-10-08 18:44:33.303306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.488 [2024-10-08 18:44:33.303523] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.488 [2024-10-08 18:44:33.303534] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.488 [2024-10-08 18:44:33.303542] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.488 [2024-10-08 18:44:33.307049] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.488 [2024-10-08 18:44:33.316344] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.488 [2024-10-08 18:44:33.317028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.488 [2024-10-08 18:44:33.317091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.488 [2024-10-08 18:44:33.317105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.488 [2024-10-08 18:44:33.317357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.488 [2024-10-08 18:44:33.317579] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.488 [2024-10-08 18:44:33.317597] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.488 [2024-10-08 18:44:33.317605] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.488 [2024-10-08 18:44:33.321125] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.488 [2024-10-08 18:44:33.330191] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.488 [2024-10-08 18:44:33.330904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.488 [2024-10-08 18:44:33.330966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.488 [2024-10-08 18:44:33.330992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.488 [2024-10-08 18:44:33.331245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.488 [2024-10-08 18:44:33.331468] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.331478] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.331486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.335005] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.344056] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.344733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.344795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.344808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.345076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.345300] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.345309] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.345317] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.348828] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.357900] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.358568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.358631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.358644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.358896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.359132] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.359143] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.359151] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.362669] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.371659] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.372363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.372425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.372438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.372690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.372914] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.372923] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.372931] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.376465] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.385533] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.386210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.386273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.386286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.386538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.386762] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.386771] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.386779] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.390308] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.399358] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.400073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.400135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.400148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.400400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.400623] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.400632] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.400641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.404167] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.413244] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.413968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.414053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.414067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.414327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.414551] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.414560] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.414568] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.418082] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.427150] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.427786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.427813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.427822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.428052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.428271] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.428280] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.428288] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.431790] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.441070] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.441673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.441696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.441704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.441922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.442148] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.442158] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.442166] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.445658] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.454905] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.455458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.455481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.455489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.455707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.455924] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.455934] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.455956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.459459] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.468702] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.489 [2024-10-08 18:44:33.469268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.489 [2024-10-08 18:44:33.469327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.489 [2024-10-08 18:44:33.469339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.489 [2024-10-08 18:44:33.469590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.489 [2024-10-08 18:44:33.469813] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.489 [2024-10-08 18:44:33.469822] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.489 [2024-10-08 18:44:33.469831] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.489 [2024-10-08 18:44:33.473359] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.489 [2024-10-08 18:44:33.482613] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.490 [2024-10-08 18:44:33.483328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.490 [2024-10-08 18:44:33.483392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.490 [2024-10-08 18:44:33.483405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.490 [2024-10-08 18:44:33.483657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.490 [2024-10-08 18:44:33.483879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.490 [2024-10-08 18:44:33.483890] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.490 [2024-10-08 18:44:33.483898] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.490 [2024-10-08 18:44:33.487437] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.490 [2024-10-08 18:44:33.496763] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.490 [2024-10-08 18:44:33.497465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.490 [2024-10-08 18:44:33.497527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.490 [2024-10-08 18:44:33.497540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.490 [2024-10-08 18:44:33.497793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.490 [2024-10-08 18:44:33.498030] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.490 [2024-10-08 18:44:33.498040] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.490 [2024-10-08 18:44:33.498048] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.490 [2024-10-08 18:44:33.501559] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.490 [2024-10-08 18:44:33.510615] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.490 [2024-10-08 18:44:33.511228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.490 [2024-10-08 18:44:33.511298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.490 [2024-10-08 18:44:33.511312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.490 [2024-10-08 18:44:33.511564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.490 [2024-10-08 18:44:33.511788] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.490 [2024-10-08 18:44:33.511797] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.490 [2024-10-08 18:44:33.511805] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.490 [2024-10-08 18:44:33.515363] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.490 [2024-10-08 18:44:33.524422] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.490 [2024-10-08 18:44:33.525116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.490 [2024-10-08 18:44:33.525178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.490 [2024-10-08 18:44:33.525191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.490 [2024-10-08 18:44:33.525443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.490 [2024-10-08 18:44:33.525666] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.490 [2024-10-08 18:44:33.525677] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.490 [2024-10-08 18:44:33.525685] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.490 [2024-10-08 18:44:33.529207] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.490 [2024-10-08 18:44:33.538268] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.490 [2024-10-08 18:44:33.539024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.490 [2024-10-08 18:44:33.539087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.490 [2024-10-08 18:44:33.539100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.490 [2024-10-08 18:44:33.539352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.490 [2024-10-08 18:44:33.539575] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.490 [2024-10-08 18:44:33.539584] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.490 [2024-10-08 18:44:33.539593] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.490 [2024-10-08 18:44:33.543130] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.752 [2024-10-08 18:44:33.552214] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.752 [2024-10-08 18:44:33.552918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.752 [2024-10-08 18:44:33.552991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.752 [2024-10-08 18:44:33.553006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.752 [2024-10-08 18:44:33.553258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.752 [2024-10-08 18:44:33.553489] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.752 [2024-10-08 18:44:33.553498] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.752 [2024-10-08 18:44:33.553507] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.752 [2024-10-08 18:44:33.557020] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.752 [2024-10-08 18:44:33.566072] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.752 [2024-10-08 18:44:33.566682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.752 [2024-10-08 18:44:33.566745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.752 [2024-10-08 18:44:33.566758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.752 [2024-10-08 18:44:33.567025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.752 [2024-10-08 18:44:33.567250] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.752 [2024-10-08 18:44:33.567260] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.752 [2024-10-08 18:44:33.567268] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.752 [2024-10-08 18:44:33.570775] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.752 [2024-10-08 18:44:33.579831] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.752 [2024-10-08 18:44:33.580466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.752 [2024-10-08 18:44:33.580495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.752 [2024-10-08 18:44:33.580504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.752 [2024-10-08 18:44:33.580725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.752 [2024-10-08 18:44:33.580943] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.752 [2024-10-08 18:44:33.580953] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.752 [2024-10-08 18:44:33.580960] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.752 [2024-10-08 18:44:33.584466] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.752 [2024-10-08 18:44:33.593716] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.752 [2024-10-08 18:44:33.594277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.752 [2024-10-08 18:44:33.594299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.752 [2024-10-08 18:44:33.594308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.752 [2024-10-08 18:44:33.594526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.752 [2024-10-08 18:44:33.594743] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.752 [2024-10-08 18:44:33.594760] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.752 [2024-10-08 18:44:33.594768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.752 [2024-10-08 18:44:33.598280] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.752 [2024-10-08 18:44:33.607526] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.752 [2024-10-08 18:44:33.608198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.752 [2024-10-08 18:44:33.608261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.752 [2024-10-08 18:44:33.608273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.752 [2024-10-08 18:44:33.608525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.752 [2024-10-08 18:44:33.608749] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.752 [2024-10-08 18:44:33.608758] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.752 [2024-10-08 18:44:33.608766] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.752 [2024-10-08 18:44:33.612295] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.752 [2024-10-08 18:44:33.621386] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.622090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.622154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.622167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.622419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.622642] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.622652] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.622660] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.626183] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.635239] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.635867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.635894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.635903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.636133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.636352] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.636362] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.636370] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.639865] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.649114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.649669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.649693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.649709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.649927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.650155] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.650165] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.650173] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.653664] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.662911] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.663561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.663623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.663636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.663889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.664128] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.664139] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.664148] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.667653] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.676712] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.677281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.677309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.677319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.677539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.677759] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.677769] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.677778] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.681284] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.690555] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.691287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.691351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.691364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.691617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.691841] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.691858] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.691867] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.695392] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.704578] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.705286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.705349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.705361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.705613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.705837] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.705847] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.705855] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.709374] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.718493] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.719234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.719298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.719311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.719563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.719786] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.719796] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.719804] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.723325] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.732388] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.732934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.732963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.732972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.733202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.733421] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.733431] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.733440] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.736940] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.746221] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.746871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.746935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.746948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.753 [2024-10-08 18:44:33.747213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.753 [2024-10-08 18:44:33.747437] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.753 [2024-10-08 18:44:33.747446] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.753 [2024-10-08 18:44:33.747455] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.753 [2024-10-08 18:44:33.750981] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.753 [2024-10-08 18:44:33.760079] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.753 [2024-10-08 18:44:33.760678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.753 [2024-10-08 18:44:33.760708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.753 [2024-10-08 18:44:33.760717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.754 [2024-10-08 18:44:33.760937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.754 [2024-10-08 18:44:33.761166] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.754 [2024-10-08 18:44:33.761178] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.754 [2024-10-08 18:44:33.761186] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.754 [2024-10-08 18:44:33.764702] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.754 [2024-10-08 18:44:33.773996] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.754 [2024-10-08 18:44:33.774558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.754 [2024-10-08 18:44:33.774584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.754 [2024-10-08 18:44:33.774593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.754 [2024-10-08 18:44:33.774812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.754 [2024-10-08 18:44:33.775040] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.754 [2024-10-08 18:44:33.775051] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.754 [2024-10-08 18:44:33.775058] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.754 [2024-10-08 18:44:33.778566] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.754 [2024-10-08 18:44:33.787848] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.754 [2024-10-08 18:44:33.788395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.754 [2024-10-08 18:44:33.788420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.754 [2024-10-08 18:44:33.788429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.754 [2024-10-08 18:44:33.788654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.754 [2024-10-08 18:44:33.788872] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.754 [2024-10-08 18:44:33.788883] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.754 [2024-10-08 18:44:33.788890] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.754 [2024-10-08 18:44:33.792414] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.754 [2024-10-08 18:44:33.801708] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.754 [2024-10-08 18:44:33.802277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.754 [2024-10-08 18:44:33.802301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:39.754 [2024-10-08 18:44:33.802309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:39.754 [2024-10-08 18:44:33.802527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:39.754 [2024-10-08 18:44:33.802744] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.754 [2024-10-08 18:44:33.802762] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.754 [2024-10-08 18:44:33.802771] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.754 [2024-10-08 18:44:33.806283] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.066 [2024-10-08 18:44:33.815605] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.066 [2024-10-08 18:44:33.816173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.066 [2024-10-08 18:44:33.816197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.066 [2024-10-08 18:44:33.816206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.066 [2024-10-08 18:44:33.816425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.066 [2024-10-08 18:44:33.816658] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.066 [2024-10-08 18:44:33.816669] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.066 [2024-10-08 18:44:33.816677] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.066 [2024-10-08 18:44:33.820098] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.066 [2024-10-08 18:44:33.828318] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.066 [2024-10-08 18:44:33.828809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.066 [2024-10-08 18:44:33.828865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.066 [2024-10-08 18:44:33.828875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.066 [2024-10-08 18:44:33.829076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.066 [2024-10-08 18:44:33.829232] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.066 [2024-10-08 18:44:33.829240] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.066 [2024-10-08 18:44:33.829253] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.066 [2024-10-08 18:44:33.831676] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.066 [2024-10-08 18:44:33.841024] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.066 [2024-10-08 18:44:33.841618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.066 [2024-10-08 18:44:33.841671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.066 [2024-10-08 18:44:33.841682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.066 [2024-10-08 18:44:33.841863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.066 [2024-10-08 18:44:33.842031] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.066 [2024-10-08 18:44:33.842038] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.066 [2024-10-08 18:44:33.842045] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.066 [2024-10-08 18:44:33.844467] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.066 [2024-10-08 18:44:33.853675] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.066 [2024-10-08 18:44:33.854354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.066 [2024-10-08 18:44:33.854402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.066 [2024-10-08 18:44:33.854411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.066 [2024-10-08 18:44:33.854587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.854741] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.854749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.854755] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.857179] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.866358] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.866883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.866904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.866911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.867067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.867218] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.867224] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.867230] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.869629] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.878954] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.879477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.879494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.879499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.879649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.879799] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.879807] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.879813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.882223] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.891563] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.892220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.892260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.892268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.892438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.892590] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.892596] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.892602] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.895012] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.904201] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.904708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.904725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.904731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.904881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.905039] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.905046] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.905051] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.907455] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.916794] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.917264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.917279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.917285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.917439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.917587] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.917594] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.917599] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.920000] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.929448] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.929939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.929952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.929958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.930111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.930259] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.930265] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.930270] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.932665] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.942160] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.942707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.942740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.942749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.942915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.943076] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.943083] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.943088] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.945486] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.954797] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.955259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.955275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.955284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.955433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.955581] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.955586] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.955595] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.957994] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.967443] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.967928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.967941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.967946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.968100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.968248] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.968254] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.968259] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.970651] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.980094] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.980567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.980579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.980585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.980732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.980880] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.980885] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.980890] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.983285] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:33.992722] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:33.993287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:33.993317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:33.993326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:33.993490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:33.993641] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:33.993647] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:33.993652] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:33.996051] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:34.005352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:34.005810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:34.005829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:34.005835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:34.005990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:34.006139] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:34.006145] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:34.006150] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 7377.00 IOPS, 28.82 MiB/s [2024-10-08T16:44:34.124Z] [2024-10-08 18:44:34.009672] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:34.018013] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:34.018466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:34.018479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:34.018485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:34.018633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:34.018781] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:34.018786] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:34.018791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:34.021186] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:34.030628] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:34.031246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:34.031277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:34.031285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:34.031449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:34.031600] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:34.031606] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:34.031612] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:34.034015] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:34.043320] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:34.043801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:34.043816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:34.043821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:34.043970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.067 [2024-10-08 18:44:34.044127] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.067 [2024-10-08 18:44:34.044134] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.067 [2024-10-08 18:44:34.044139] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.067 [2024-10-08 18:44:34.046529] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.067 [2024-10-08 18:44:34.055971] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.067 [2024-10-08 18:44:34.056452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.067 [2024-10-08 18:44:34.056464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.067 [2024-10-08 18:44:34.056469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.067 [2024-10-08 18:44:34.056617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.068 [2024-10-08 18:44:34.056766] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.068 [2024-10-08 18:44:34.056772] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.068 [2024-10-08 18:44:34.056777] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.068 [2024-10-08 18:44:34.059172] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.068 [2024-10-08 18:44:34.068612] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.068 [2024-10-08 18:44:34.068983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.068 [2024-10-08 18:44:34.068996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.068 [2024-10-08 18:44:34.069002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.068 [2024-10-08 18:44:34.069150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.068 [2024-10-08 18:44:34.069297] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.068 [2024-10-08 18:44:34.069303] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.068 [2024-10-08 18:44:34.069308] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.068 [2024-10-08 18:44:34.071697] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.068 [2024-10-08 18:44:34.081292] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.068 [2024-10-08 18:44:34.081822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.068 [2024-10-08 18:44:34.081834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.068 [2024-10-08 18:44:34.081839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.068 [2024-10-08 18:44:34.081992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.068 [2024-10-08 18:44:34.082140] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.068 [2024-10-08 18:44:34.082146] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.068 [2024-10-08 18:44:34.082151] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.068 [2024-10-08 18:44:34.084547] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.093852] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.094316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.094329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.094334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.094482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.094631] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.094636] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.094641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.097036] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.106476] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.106921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.106933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.106938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.107090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.107238] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.107244] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.107249] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.109638] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.119103] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.119639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.119670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.119678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.119842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.120001] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.120008] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.120013] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.122408] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.131720] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.132272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.132303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.132319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.132486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.132638] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.132644] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.132649] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.135057] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.144371] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.144849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.144864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.144870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.145023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.145172] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.145179] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.145184] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.147575] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.157022] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.157488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.157501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.157506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.157654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.157802] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.157808] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.157813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.160208] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.169650] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.170220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.170250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.170259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.170426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.170577] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.170587] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.170592] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.172999] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.182313] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.182777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.357 [2024-10-08 18:44:34.182792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.357 [2024-10-08 18:44:34.182798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.357 [2024-10-08 18:44:34.182947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.357 [2024-10-08 18:44:34.183101] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.357 [2024-10-08 18:44:34.183107] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.357 [2024-10-08 18:44:34.183112] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.357 [2024-10-08 18:44:34.185504] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.357 [2024-10-08 18:44:34.194946] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.357 [2024-10-08 18:44:34.195282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.195295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.195300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.195449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.195596] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.195602] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.195607] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.198000] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.207581] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.208197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.208228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.208236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.208401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.208552] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.208558] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.208563] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.210958] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.220282] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.220814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.220828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.220834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.220988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.221137] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.221143] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.221148] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.223542] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.232989] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.233525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.233556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.233564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.233729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.233880] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.233886] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.233891] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.236293] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.245588] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.246107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.246137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.246146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.246313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.246464] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.246470] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.246476] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.248875] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.258176] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.258631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.258660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.258672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.258836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.258997] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.259005] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.259010] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.261406] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.270841] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.271493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.271523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.271532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.271696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.271847] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.271853] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.271859] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.274256] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.283411] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.283787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.283802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.283808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.283956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.284108] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.284115] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.284120] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.286507] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.296084] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.296618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.296648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.296657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.296821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.296972] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.296985] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.296995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.299388] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.308685] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.309109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.309139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.309148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.309315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.309466] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.309472] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.309477] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.311876] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.321333] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.321726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.321741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.321746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.321895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.322048] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.322055] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.322060] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.324449] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.334023] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.334480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.334492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.334497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.334646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.334793] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.334799] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.334804] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.337196] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.346624] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.358 [2024-10-08 18:44:34.346939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.358 [2024-10-08 18:44:34.346951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.358 [2024-10-08 18:44:34.346957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.358 [2024-10-08 18:44:34.347107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.358 [2024-10-08 18:44:34.347255] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.358 [2024-10-08 18:44:34.347261] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.358 [2024-10-08 18:44:34.347266] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.358 [2024-10-08 18:44:34.349654] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.358 [2024-10-08 18:44:34.359225] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.358 [2024-10-08 18:44:34.359704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.358 [2024-10-08 18:44:34.359715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.358 [2024-10-08 18:44:34.359721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.358 [2024-10-08 18:44:34.359868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.358 [2024-10-08 18:44:34.360020] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.358 [2024-10-08 18:44:34.360027] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.358 [2024-10-08 18:44:34.360032] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.358 [2024-10-08 18:44:34.362419] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.358 [2024-10-08 18:44:34.371850] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.358 [2024-10-08 18:44:34.372330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.358 [2024-10-08 18:44:34.372342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.358 [2024-10-08 18:44:34.372347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.358 [2024-10-08 18:44:34.372495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.358 [2024-10-08 18:44:34.372643] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.358 [2024-10-08 18:44:34.372648] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.358 [2024-10-08 18:44:34.372653] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.358 [2024-10-08 18:44:34.375044] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.359 [2024-10-08 18:44:34.384472] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.359 [2024-10-08 18:44:34.384916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.359 [2024-10-08 18:44:34.384928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.359 [2024-10-08 18:44:34.384933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.359 [2024-10-08 18:44:34.385088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.359 [2024-10-08 18:44:34.385237] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.359 [2024-10-08 18:44:34.385243] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.359 [2024-10-08 18:44:34.385248] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.359 [2024-10-08 18:44:34.387635] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.359 [2024-10-08 18:44:34.397036] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.359 [2024-10-08 18:44:34.397501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.359 [2024-10-08 18:44:34.397514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.359 [2024-10-08 18:44:34.397519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.359 [2024-10-08 18:44:34.397667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.397817] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.397824] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.397829] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.651 [2024-10-08 18:44:34.400225] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.651 [2024-10-08 18:44:34.409661] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.651 [2024-10-08 18:44:34.410134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.651 [2024-10-08 18:44:34.410165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.651 [2024-10-08 18:44:34.410174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.651 [2024-10-08 18:44:34.410341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.410492] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.410499] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.410504] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.651 [2024-10-08 18:44:34.412904] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.651 [2024-10-08 18:44:34.422362] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.651 [2024-10-08 18:44:34.422828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.651 [2024-10-08 18:44:34.422859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.651 [2024-10-08 18:44:34.422869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.651 [2024-10-08 18:44:34.423042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.423193] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.423200] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.423209] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.651 [2024-10-08 18:44:34.425603] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.651 [2024-10-08 18:44:34.435047] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.651 [2024-10-08 18:44:34.435547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.651 [2024-10-08 18:44:34.435578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.651 [2024-10-08 18:44:34.435587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.651 [2024-10-08 18:44:34.435753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.435904] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.435910] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.435916] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.651 [2024-10-08 18:44:34.438316] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.651 [2024-10-08 18:44:34.447615] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.651 [2024-10-08 18:44:34.448003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.651 [2024-10-08 18:44:34.448019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.651 [2024-10-08 18:44:34.448024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.651 [2024-10-08 18:44:34.448173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.448321] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.448327] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.448332] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.651 [2024-10-08 18:44:34.450723] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.651 [2024-10-08 18:44:34.460302] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.651 [2024-10-08 18:44:34.460785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.651 [2024-10-08 18:44:34.460797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.651 [2024-10-08 18:44:34.460802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.651 [2024-10-08 18:44:34.460950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.461103] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.461109] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.461114] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.651 [2024-10-08 18:44:34.463504] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.651 [2024-10-08 18:44:34.472936] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.651 [2024-10-08 18:44:34.473509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.651 [2024-10-08 18:44:34.473543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.651 [2024-10-08 18:44:34.473552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.651 [2024-10-08 18:44:34.473716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.651 [2024-10-08 18:44:34.473867] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.651 [2024-10-08 18:44:34.473873] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.651 [2024-10-08 18:44:34.473878] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.476280] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.652 [2024-10-08 18:44:34.485578] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.486068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.486099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.486108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.486275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.486425] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.486431] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.486437] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.488834] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.652 [2024-10-08 18:44:34.498166] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.498657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.498672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.498678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.498826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.498980] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.498986] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.498991] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.501382] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.652 [2024-10-08 18:44:34.510809] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.511395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.511425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.511434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.511598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.511753] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.511759] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.511765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.514166] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.652 [2024-10-08 18:44:34.523475] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.524021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.524051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.524060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.524224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.524375] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.524381] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.524387] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.526788] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.652 [2024-10-08 18:44:34.536078] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.536644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.536675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.536683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.536847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.537005] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.537013] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.537018] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.539412] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.652 [2024-10-08 18:44:34.548698] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.549188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.549219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.549227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.549392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.549543] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.549549] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.549554] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.551956] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.652 [2024-10-08 18:44:34.561385] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.561953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.561988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.561998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.562164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.562315] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.562321] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.562327] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.564725] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.652 [2024-10-08 18:44:34.574017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.574511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.574541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.574550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.574716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.574867] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.574873] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.574879] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.577281] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.652 [2024-10-08 18:44:34.586708] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.587266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.587296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.587305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.587469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.587620] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.587626] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.587631] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.590031] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.652 [2024-10-08 18:44:34.599315] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.599856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.599886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.599897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.652 [2024-10-08 18:44:34.600069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.652 [2024-10-08 18:44:34.600221] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.652 [2024-10-08 18:44:34.600227] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.652 [2024-10-08 18:44:34.600232] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.652 [2024-10-08 18:44:34.602625] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.652 [2024-10-08 18:44:34.611914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.652 [2024-10-08 18:44:34.612509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.652 [2024-10-08 18:44:34.612539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.652 [2024-10-08 18:44:34.612548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.653 [2024-10-08 18:44:34.612712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.653 [2024-10-08 18:44:34.612863] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.653 [2024-10-08 18:44:34.612869] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.653 [2024-10-08 18:44:34.612874] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.653 [2024-10-08 18:44:34.615272] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.653 [2024-10-08 18:44:34.624579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.653 [2024-10-08 18:44:34.625075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.653 [2024-10-08 18:44:34.625105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.653 [2024-10-08 18:44:34.625114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.653 [2024-10-08 18:44:34.625279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.653 [2024-10-08 18:44:34.625430] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.653 [2024-10-08 18:44:34.625436] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.653 [2024-10-08 18:44:34.625441] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.653 [2024-10-08 18:44:34.627840] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.653 [2024-10-08 18:44:34.637269] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.653 [2024-10-08 18:44:34.637830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.653 [2024-10-08 18:44:34.637860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.653 [2024-10-08 18:44:34.637869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.653 [2024-10-08 18:44:34.638040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.653 [2024-10-08 18:44:34.638192] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.653 [2024-10-08 18:44:34.638202] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.653 [2024-10-08 18:44:34.638208] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.653 [2024-10-08 18:44:34.640602] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.653 [2024-10-08 18:44:34.649890] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.653 [2024-10-08 18:44:34.650360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.653 [2024-10-08 18:44:34.650391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.653 [2024-10-08 18:44:34.650400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.653 [2024-10-08 18:44:34.650566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.653 [2024-10-08 18:44:34.650716] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.653 [2024-10-08 18:44:34.650723] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.653 [2024-10-08 18:44:34.650728] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.653 [2024-10-08 18:44:34.653129] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.653 [2024-10-08 18:44:34.662564] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.653 [2024-10-08 18:44:34.663174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.653 [2024-10-08 18:44:34.663204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.653 [2024-10-08 18:44:34.663213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.653 [2024-10-08 18:44:34.663377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.653 [2024-10-08 18:44:34.663528] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.653 [2024-10-08 18:44:34.663535] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.653 [2024-10-08 18:44:34.663540] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.653 [2024-10-08 18:44:34.665938] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.653 [2024-10-08 18:44:34.675231] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.653 [2024-10-08 18:44:34.675798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.653 [2024-10-08 18:44:34.675828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.653 [2024-10-08 18:44:34.675837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.653 [2024-10-08 18:44:34.676009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.653 [2024-10-08 18:44:34.676161] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.653 [2024-10-08 18:44:34.676167] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.653 [2024-10-08 18:44:34.676173] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.653 [2024-10-08 18:44:34.678564] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.914 [2024-10-08 18:44:34.687864] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.688427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.688458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.688467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.688631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.688782] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.688789] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.688794] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.691196] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.914 [2024-10-08 18:44:34.700492] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.701020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.701050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.701059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.701225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.701376] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.701382] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.701388] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.703786] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.914 [2024-10-08 18:44:34.713078] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.713528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.713542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.713547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.713695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.713843] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.713849] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.713854] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.716254] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.914 [2024-10-08 18:44:34.725697] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.726283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.726314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.726322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.726490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.726641] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.726647] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.726652] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.729054] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.914 [2024-10-08 18:44:34.738424] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.738962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.738998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.739006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.739170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.739321] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.739327] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.739332] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.741729] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.914 [2024-10-08 18:44:34.751017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.751572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.751602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.751611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.751775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.751925] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.751932] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.751937] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.754338] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.914 [2024-10-08 18:44:34.763626] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.764220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.764250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.764258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.764422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.764573] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.764579] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.764591] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.766990] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.914 [2024-10-08 18:44:34.776278] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.776861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.776891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.776900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.777072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.777224] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.777230] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.777235] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.779629] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.914 [2024-10-08 18:44:34.788916] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.789460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.789490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.789499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.789663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.789814] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.789820] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.789826] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.792227] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.914 [2024-10-08 18:44:34.801514] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.802076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.802107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.802116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.914 [2024-10-08 18:44:34.802282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.914 [2024-10-08 18:44:34.802433] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.914 [2024-10-08 18:44:34.802439] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.914 [2024-10-08 18:44:34.802444] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.914 [2024-10-08 18:44:34.804844] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.914 [2024-10-08 18:44:34.814133] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.914 [2024-10-08 18:44:34.814752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-10-08 18:44:34.814782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.914 [2024-10-08 18:44:34.814791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.915 [2024-10-08 18:44:34.814955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.915 [2024-10-08 18:44:34.815112] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.915 [2024-10-08 18:44:34.815119] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.915 [2024-10-08 18:44:34.815125] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.915 [2024-10-08 18:44:34.817534] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.915 [2024-10-08 18:44:34.826826] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.915 [2024-10-08 18:44:34.827400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-10-08 18:44:34.827430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.915 [2024-10-08 18:44:34.827439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.915 [2024-10-08 18:44:34.827603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.915 [2024-10-08 18:44:34.827754] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.915 [2024-10-08 18:44:34.827760] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.915 [2024-10-08 18:44:34.827765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.915 [2024-10-08 18:44:34.830169] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.915 [2024-10-08 18:44:34.839468] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.915 [2024-10-08 18:44:34.840120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-10-08 18:44:34.840151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.915 [2024-10-08 18:44:34.840160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.915 [2024-10-08 18:44:34.840324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.915 [2024-10-08 18:44:34.840475] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.915 [2024-10-08 18:44:34.840481] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.915 [2024-10-08 18:44:34.840486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.915 [2024-10-08 18:44:34.842886] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.915 [2024-10-08 18:44:34.852039] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.915 [2024-10-08 18:44:34.852619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-10-08 18:44:34.852649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:40.915 [2024-10-08 18:44:34.852657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:40.915 [2024-10-08 18:44:34.852825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:40.915 [2024-10-08 18:44:34.852982] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.915 [2024-10-08 18:44:34.852989] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.915 [2024-10-08 18:44:34.852995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.915 [2024-10-08 18:44:34.855387] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.915 [2024-10-08 18:44:34.864674] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.865272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.865303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.865312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.865477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.865628] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.865635] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.865640] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.868043] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.877329] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.877894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.877925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.877934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.878107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.878258] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.878265] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.878271] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.880665] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.889952] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.890423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.890452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.890461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.890625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.890775] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.890781] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.890791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.893195] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.902623] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.903091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.903122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.903131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.903297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.903448] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.903455] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.903460] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.905859] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.915300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.915938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.915968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.915983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.916148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.916299] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.916305] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.916310] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.918719] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.927875] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.928262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.928293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.928302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.928468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.928618] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.928625] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.928630] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.931028] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.940461] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.940905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.940939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.915 [2024-10-08 18:44:34.940948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.915 [2024-10-08 18:44:34.941120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.915 [2024-10-08 18:44:34.941272] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.915 [2024-10-08 18:44:34.941278] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.915 [2024-10-08 18:44:34.941284] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.915 [2024-10-08 18:44:34.943678] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.915 [2024-10-08 18:44:34.953114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.915 [2024-10-08 18:44:34.953570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.915 [2024-10-08 18:44:34.953584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.916 [2024-10-08 18:44:34.953590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.916 [2024-10-08 18:44:34.953738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.916 [2024-10-08 18:44:34.953886] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.916 [2024-10-08 18:44:34.953892] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.916 [2024-10-08 18:44:34.953897] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.916 [2024-10-08 18:44:34.956290] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.916 [2024-10-08 18:44:34.965712] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.916 [2024-10-08 18:44:34.966273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.916 [2024-10-08 18:44:34.966304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:40.916 [2024-10-08 18:44:34.966312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:40.916 [2024-10-08 18:44:34.966476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:40.916 [2024-10-08 18:44:34.966627] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.916 [2024-10-08 18:44:34.966633] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.916 [2024-10-08 18:44:34.966638] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.916 [2024-10-08 18:44:34.969038] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.176 [2024-10-08 18:44:34.978328] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.176 [2024-10-08 18:44:34.978815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.176 [2024-10-08 18:44:34.978829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.176 [2024-10-08 18:44:34.978835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.176 [2024-10-08 18:44:34.978990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.176 [2024-10-08 18:44:34.979143] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.176 [2024-10-08 18:44:34.979149] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.176 [2024-10-08 18:44:34.979154] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.176 [2024-10-08 18:44:34.981541] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.176 [2024-10-08 18:44:34.990961] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.176 [2024-10-08 18:44:34.991532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.176 [2024-10-08 18:44:34.991562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.177 [2024-10-08 18:44:34.991571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.177 [2024-10-08 18:44:34.991735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.177 [2024-10-08 18:44:34.991886] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.177 [2024-10-08 18:44:34.991892] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.177 [2024-10-08 18:44:34.991897] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.177 [2024-10-08 18:44:34.994298] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.177 [2024-10-08 18:44:35.003588] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.177 [2024-10-08 18:44:35.004101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.177 [2024-10-08 18:44:35.004131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.177 [2024-10-08 18:44:35.004140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.177 [2024-10-08 18:44:35.004306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.177 [2024-10-08 18:44:35.004457] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.177 [2024-10-08 18:44:35.004463] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.177 [2024-10-08 18:44:35.004469] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.177 [2024-10-08 18:44:35.006868] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.177 5901.60 IOPS, 23.05 MiB/s [2024-10-08T16:44:35.234Z] [2024-10-08 18:44:35.016170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.177 [2024-10-08 18:44:35.016743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.177 [2024-10-08 18:44:35.016773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.177 [2024-10-08 18:44:35.016782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.177 [2024-10-08 18:44:35.016946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.177 [2024-10-08 18:44:35.017114] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.177 [2024-10-08 18:44:35.017121] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.177 [2024-10-08 18:44:35.017126] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.177 [2024-10-08 18:44:35.019534] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.177 [2024-10-08 18:44:35.028824] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.177 [2024-10-08 18:44:35.029367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.177 [2024-10-08 18:44:35.029398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.177 [2024-10-08 18:44:35.029407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.177 [2024-10-08 18:44:35.029571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.177 [2024-10-08 18:44:35.029722] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.177 [2024-10-08 18:44:35.029728] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.177 [2024-10-08 18:44:35.029734] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.177 [2024-10-08 18:44:35.032133] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.177 [2024-10-08 18:44:35.041420] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.041906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.041921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.041926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.042080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.042229] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.042235] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.042240] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.177 [2024-10-08 18:44:35.044629] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.177 [2024-10-08 18:44:35.054052] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.054662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.054693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.054701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.054866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.055024] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.055032] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.055037] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.177 [2024-10-08 18:44:35.057431] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.177 [2024-10-08 18:44:35.066718] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.067267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.067297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.067308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.067472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.067623] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.067629] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.067635] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.177 [2024-10-08 18:44:35.070034] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.177 [2024-10-08 18:44:35.079332] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.079900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.079931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.079940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.080112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.080263] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.080269] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.080275] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.177 [2024-10-08 18:44:35.082669] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.177 [2024-10-08 18:44:35.091956] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.092521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.092552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.092560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.092724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.092875] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.092882] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.092887] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.177 [2024-10-08 18:44:35.095287] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.177 [2024-10-08 18:44:35.104578] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.105065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.105096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.105105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.105271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.105422] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.105432] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.105437] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.177 [2024-10-08 18:44:35.107840] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.177 [2024-10-08 18:44:35.117279] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.177 [2024-10-08 18:44:35.117759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.177 [2024-10-08 18:44:35.117789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.177 [2024-10-08 18:44:35.117798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.177 [2024-10-08 18:44:35.117970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.177 [2024-10-08 18:44:35.118129] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.177 [2024-10-08 18:44:35.118136] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.177 [2024-10-08 18:44:35.118141] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.120534] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.178 [2024-10-08 18:44:35.129962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.130540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.130570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.130579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.130744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.130894] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.130900] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.130906] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.133308] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.178 [2024-10-08 18:44:35.142593] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.143099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.143129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.143138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.143305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.143456] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.143462] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.143467] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.145868] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.178 [2024-10-08 18:44:35.155162] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.155649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.155663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.155669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.155817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.155965] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.155971] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.155982] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.158371] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.178 [2024-10-08 18:44:35.167795] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.168136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.168150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.168156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.168304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.168453] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.168458] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.168463] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.170855] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.178 [2024-10-08 18:44:35.180419] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.180871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.180882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.180888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.181040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.181189] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.181195] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.181200] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.183587] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.178 [2024-10-08 18:44:35.193010] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.193550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.193580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.193589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.193759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.193910] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.193916] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.193922] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.196323] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.178 [2024-10-08 18:44:35.205618] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.206153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.206184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.206192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.206359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.206509] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.206516] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.206521] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.208920] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.178 [2024-10-08 18:44:35.218224] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.218808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.218838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.218847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.219019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.219171] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.219177] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.219182] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.178 [2024-10-08 18:44:35.221576] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.178 [2024-10-08 18:44:35.230868] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.178 [2024-10-08 18:44:35.231462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.178 [2024-10-08 18:44:35.231492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.178 [2024-10-08 18:44:35.231501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.178 [2024-10-08 18:44:35.231665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.178 [2024-10-08 18:44:35.231816] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.178 [2024-10-08 18:44:35.231822] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.178 [2024-10-08 18:44:35.231831] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.234230] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.439 [2024-10-08 18:44:35.243524] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.244088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.244119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.244128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.244293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.244444] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.244450] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.244456] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.246853] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.439 [2024-10-08 18:44:35.256146] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.256639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.256653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.256659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.256808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.256956] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.256961] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.256967] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.259361] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.439 [2024-10-08 18:44:35.268787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.269330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.269361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.269370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.269534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.269685] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.269691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.269696] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.272097] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.439 [2024-10-08 18:44:35.281387] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.281875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.281889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.281895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.282049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.282198] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.282204] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.282209] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.284597] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.439 [2024-10-08 18:44:35.294019] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.294578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.294609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.294617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.294781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.294932] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.294938] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.294944] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.297346] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.439 [2024-10-08 18:44:35.306635] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.307268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.307298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.307307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.307470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.307621] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.307627] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.307633] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.310033] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.439 [2024-10-08 18:44:35.319336] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.319901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.319932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.319940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.320117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.320269] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.439 [2024-10-08 18:44:35.320275] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.439 [2024-10-08 18:44:35.320281] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.439 [2024-10-08 18:44:35.322676] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.439 [2024-10-08 18:44:35.331962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.439 [2024-10-08 18:44:35.332557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.439 [2024-10-08 18:44:35.332587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.439 [2024-10-08 18:44:35.332596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.439 [2024-10-08 18:44:35.332760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.439 [2024-10-08 18:44:35.332911] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.332917] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.332923] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.335322] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.440 [2024-10-08 18:44:35.344610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.345101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.345132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.345141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.345307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.345458] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.345464] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.345469] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.347868] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.440 [2024-10-08 18:44:35.357300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.357871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.357901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.357910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.358082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.358233] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.358240] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.358248] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.360643] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.440 [2024-10-08 18:44:35.369928] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.370530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.370560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.370569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.370733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.370884] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.370890] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.370895] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.373294] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.440 [2024-10-08 18:44:35.382579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.383209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.383239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.383248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.383412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.383563] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.383569] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.383574] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.385970] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.440 [2024-10-08 18:44:35.395261] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.395813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.395844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.395852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.396024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.396175] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.396182] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.396187] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.398579] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.440 [2024-10-08 18:44:35.407864] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.408418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.408455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.408464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.408628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.408778] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.408785] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.408790] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.411190] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.440 [2024-10-08 18:44:35.420446] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.421025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.421055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.421064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.421230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.421381] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.421387] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.421393] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.423792] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.440 [2024-10-08 18:44:35.433081] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.433541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.433571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.433580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.433744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.433894] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.433901] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.433906] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.436307] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.440 [2024-10-08 18:44:35.445733] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.446238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.446268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.446277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.446444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.446598] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.446604] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.446610] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.449014] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.440 [2024-10-08 18:44:35.458310] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.440 [2024-10-08 18:44:35.458667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.440 [2024-10-08 18:44:35.458685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.440 [2024-10-08 18:44:35.458692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.440 [2024-10-08 18:44:35.458843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.440 [2024-10-08 18:44:35.459000] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.440 [2024-10-08 18:44:35.459008] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.440 [2024-10-08 18:44:35.459013] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.440 [2024-10-08 18:44:35.461403] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.440 [2024-10-08 18:44:35.470972] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.440 [2024-10-08 18:44:35.471466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.440 [2024-10-08 18:44:35.471478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.441 [2024-10-08 18:44:35.471484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.441 [2024-10-08 18:44:35.471632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.441 [2024-10-08 18:44:35.471781] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.441 [2024-10-08 18:44:35.471786] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.441 [2024-10-08 18:44:35.471791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.441 [2024-10-08 18:44:35.474183] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.441 [2024-10-08 18:44:35.483608] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.441 [2024-10-08 18:44:35.484259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.441 [2024-10-08 18:44:35.484290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.441 [2024-10-08 18:44:35.484299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.441 [2024-10-08 18:44:35.484463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.441 [2024-10-08 18:44:35.484613] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.441 [2024-10-08 18:44:35.484620] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.441 [2024-10-08 18:44:35.484625] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.441 [2024-10-08 18:44:35.487035] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.496340] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.496924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.496955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.496964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.497134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.497285] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.497291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.497297] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.499691] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.508988] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.509473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.509488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.509494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.509642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.509790] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.509796] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.509801] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.512196] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.521648] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.522265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.522296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.522305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.522469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.522620] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.522626] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.522631] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.525029] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.534322] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.534811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.534826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.534835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.534989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.535138] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.535143] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.535148] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.537537] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.546968] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.547300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.547312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.547318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.547466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.547613] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.547619] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.547624] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.550015] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.559585] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.560140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.560171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.560180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.560347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.560497] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.560504] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.560509] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.562909] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.572202] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.572837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.572868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.572876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.573047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.573199] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.573209] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.573214] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.575608] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.584761] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.585420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.585451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.585460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.585624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.585775] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.585781] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.585786] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.588206] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.597374] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.597940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.597970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.597987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.598153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.598304] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.702 [2024-10-08 18:44:35.598310] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.702 [2024-10-08 18:44:35.598316] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.702 [2024-10-08 18:44:35.600710] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.702 [2024-10-08 18:44:35.610005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.702 [2024-10-08 18:44:35.610476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.702 [2024-10-08 18:44:35.610506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.702 [2024-10-08 18:44:35.610515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.702 [2024-10-08 18:44:35.610679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.702 [2024-10-08 18:44:35.610830] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.610836] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.610841] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.613241] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.622695] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.623285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.623316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.623325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.623492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.623642] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.623649] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.623654] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.626053] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.635347] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.635838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.635853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.635858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.636012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.636161] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.636166] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.636171] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.638558] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.647988] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.648466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.648479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.648485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.648633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.648781] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.648787] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.648792] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.651184] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.660613] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.661020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.661039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.661045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.661201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.661350] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.661356] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.661361] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.663754] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
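Each repeated block above is one reconnect attempt by the host-side bdev_nvme layer: connect() to the target at 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because nothing is listening on that port while the nvmf target is down, flushing the dead qpair then fails with EBADF (9), and the controller reset completes with failure before the next retry. A minimal sketch of how errno 111 surfaces, assuming only a reachable address with no listener (the 10.0.0.2:4420 endpoint is taken from the log; this script is illustrative and not part of the test suite):

import errno
import socket

# Connect to a reachable host on a port with no listener. The kernel
# answers the SYN with RST and connect() fails with ECONNREFUSED,
# which is errno 111 on Linux -- the value seen throughout this log.
try:
    with socket.create_connection(("10.0.0.2", 4420), timeout=5) as sock:
        print("connected:", sock.getpeername())
except OSError as exc:
    print(exc.errno, errno.errorcode.get(exc.errno))  # e.g. 111 ECONNREFUSED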
00:28:41.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1404620 Killed "${NVMF_APP[@]}" "$@"
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:41.703 [2024-10-08 18:44:35.673188] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.673713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.673726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.673731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.673880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.674032] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.674038] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.674043] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.676432] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1406220
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1406220
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1406220 ']'
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:41.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:41.703 18:44:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:41.703 [2024-10-08 18:44:35.685869] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.686320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.686349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.686362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.686526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.686677] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.686683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.686689] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.689092] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.698535] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.698995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.699011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.699017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.699166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.699314] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.699321] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.699327] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.701716] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
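The xtrace above shows the harness restarting the target: the old app is killed, tgt_init launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and waitforlisten polls until the new process (pid 1406220) answers on the RPC socket /var/tmp/spdk.sock, with max_retries=100 per the trace. A rough sketch of that wait loop, assuming simple connect-and-retry semantics (the real helper lives in autotest_common.sh and may differ in detail):

import socket
import time

def waitforlisten(rpc_addr: str = "/var/tmp/spdk.sock", max_retries: int = 100) -> None:
    # Retry until the SPDK app accepts a connection on its UNIX-domain
    # RPC socket, mirroring the 'Waiting for process to start up and
    # listen ...' message in the log above.
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(rpc_addr)
                return  # target is up and listening
        except OSError:
            time.sleep(0.5)  # not up yet; try again
    raise TimeoutError(f"{rpc_addr} never came up")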
00:28:41.703 [2024-10-08 18:44:35.711158] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.711597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.711628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.711636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.711801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.711951] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.711958] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.711963] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.714364] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.723829] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.703 [2024-10-08 18:44:35.724405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.703 [2024-10-08 18:44:35.724436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.703 [2024-10-08 18:44:35.724445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.703 [2024-10-08 18:44:35.724609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.703 [2024-10-08 18:44:35.724760] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.703 [2024-10-08 18:44:35.724770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.703 [2024-10-08 18:44:35.724775] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.703 [2024-10-08 18:44:35.727175] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.703 [2024-10-08 18:44:35.729697] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:28:41.704 [2024-10-08 18:44:35.729741] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:41.704 [2024-10-08 18:44:35.736476] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.704 [2024-10-08 18:44:35.737094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.704 [2024-10-08 18:44:35.737124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.704 [2024-10-08 18:44:35.737133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.704 [2024-10-08 18:44:35.737300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.704 [2024-10-08 18:44:35.737451] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.704 [2024-10-08 18:44:35.737457] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.704 [2024-10-08 18:44:35.737463] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.704 [2024-10-08 18:44:35.739864] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.704 [2024-10-08 18:44:35.749170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.704 [2024-10-08 18:44:35.749746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.704 [2024-10-08 18:44:35.749776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.704 [2024-10-08 18:44:35.749785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.704 [2024-10-08 18:44:35.749949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.704 [2024-10-08 18:44:35.750106] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.704 [2024-10-08 18:44:35.750113] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.704 [2024-10-08 18:44:35.750118] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.704 [2024-10-08 18:44:35.752511] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
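The restart command in the trace passes -m 0xE, which the EAL banner above echoes as -c 0xE: bits 1 through 3 are set, so the app gets three cores. That matches the 'Total cores available: 3' and the three 'Reactor started on core ...' notices further down (-e 0xFFFF is the tracepoint group mask, not a core mask). A quick way to decode such a mask (a sketch; the mask value comes straight from the log):

def cores(mask: int) -> list[int]:
    # Return the CPU indices selected by an SPDK/DPDK hex core mask.
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores(0xE))  # [1, 2, 3] -> reactors on cores 1, 2 and 3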
00:28:41.965 [2024-10-08 18:44:35.761812] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.762388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.762418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.762427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.762592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.762744] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.762750] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.762759] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.765165] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.774403] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.774769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.774785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.774791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.774941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.775095] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.775101] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.775106] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.777496] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.787078] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.787561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.787574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.787579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.787727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.787875] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.787881] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.787887] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.790279] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.799714] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.800166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.800178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.800184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.800332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.800480] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.800485] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.800491] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.802879] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.812318] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.812784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.812797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.812803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.812951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.813104] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.813110] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.813115] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.813784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:41.965 [2024-10-08 18:44:35.815503] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.824964] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.825460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.825492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.825501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.825666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.825817] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.825824] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.825829] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.828229] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.837535] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.838047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.838078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.838087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.838252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.838402] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.838410] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.838416] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.840817] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.850120] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.850715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.850746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.850755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.850925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.851083] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.851090] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.965 [2024-10-08 18:44:35.851096] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.965 [2024-10-08 18:44:35.853489] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.965 [2024-10-08 18:44:35.862840] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.965 [2024-10-08 18:44:35.863320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.965 [2024-10-08 18:44:35.863334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.965 [2024-10-08 18:44:35.863339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.965 [2024-10-08 18:44:35.863489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.965 [2024-10-08 18:44:35.863636] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.965 [2024-10-08 18:44:35.863642] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.863648] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.866043] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.866676] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:41.966 [2024-10-08 18:44:35.866699] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:41.966 [2024-10-08 18:44:35.866705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:41.966 [2024-10-08 18:44:35.866711] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:41.966 [2024-10-08 18:44:35.866715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:41.966 [2024-10-08 18:44:35.867549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:28:41.966 [2024-10-08 18:44:35.867697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:28:41.966 [2024-10-08 18:44:35.867700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:28:41.966 [2024-10-08 18:44:35.875483] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.876038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.876069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.876079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.876247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.876398] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.876405] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.876410] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.878811] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.888117] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.888640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.888655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.888661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.888810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.888958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.888964] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.888969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.891364] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.900801] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.901292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.901305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.901311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.901460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.901608] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.901613] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.901619] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.904012] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.913452] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.913926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.913939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.913945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.914097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.914245] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.914251] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.914256] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.916645] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.926104] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.926688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.926720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.926729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.926900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.927057] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.927064] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.927069] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.929461] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.938754] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.939123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.939139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.939145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.939294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.939443] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.939449] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.939454] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.941845] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.951425] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.951875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.951888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.951894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.952047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.952195] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.952201] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.952206] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.954594] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.964029] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.964591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.964622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.964631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.964796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.964947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.964953] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.964963] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.967364] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.976664] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.966 [2024-10-08 18:44:35.977090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.966 [2024-10-08 18:44:35.977121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:41.966 [2024-10-08 18:44:35.977130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:41.966 [2024-10-08 18:44:35.977297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:41.966 [2024-10-08 18:44:35.977448] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.966 [2024-10-08 18:44:35.977454] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.966 [2024-10-08 18:44:35.977459] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.966 [2024-10-08 18:44:35.979857] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.966 [2024-10-08 18:44:35.989298] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.966 [2024-10-08 18:44:35.989798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-10-08 18:44:35.989814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.966 [2024-10-08 18:44:35.989819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.966 [2024-10-08 18:44:35.989968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.967 [2024-10-08 18:44:35.990122] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.967 [2024-10-08 18:44:35.990128] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.967 [2024-10-08 18:44:35.990133] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.967 [2024-10-08 18:44:35.992522] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.967 [2024-10-08 18:44:36.001957] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.967 [2024-10-08 18:44:36.002523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-10-08 18:44:36.002554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.967 [2024-10-08 18:44:36.002563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.967 [2024-10-08 18:44:36.002728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.967 [2024-10-08 18:44:36.002879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.967 [2024-10-08 18:44:36.002885] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.967 [2024-10-08 18:44:36.002891] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.967 [2024-10-08 18:44:36.005290] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.967 4918.00 IOPS, 19.21 MiB/s [2024-10-08T16:44:36.024Z] [2024-10-08 18:44:36.015743] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.967 [2024-10-08 18:44:36.016321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-10-08 18:44:36.016352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:41.967 [2024-10-08 18:44:36.016361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:41.967 [2024-10-08 18:44:36.016525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:41.967 [2024-10-08 18:44:36.016677] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.967 [2024-10-08 18:44:36.016683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.967 [2024-10-08 18:44:36.016689] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.967 [2024-10-08 18:44:36.019087] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.228 [2024-10-08 18:44:36.028407] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.028759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.028773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.028779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.028928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.029081] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.029087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.029092] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.031482] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.228 [2024-10-08 18:44:36.041059] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.041608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.041639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.041648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.041812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.041963] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.041969] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.041981] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.044377] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.228 [2024-10-08 18:44:36.053673] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.054044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.054060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.054065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.054214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.054367] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.054373] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.054378] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.056765] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.228 [2024-10-08 18:44:36.066342] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.066837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.066850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.066855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.067007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.067155] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.067161] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.067165] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.069553] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.228 [2024-10-08 18:44:36.078985] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.079528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.079559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.079568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.079732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.079883] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.079890] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.079895] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.082294] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.228 [2024-10-08 18:44:36.091591] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.092073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.092104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.092113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.092277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.092428] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.092434] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.092439] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.094843] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.228 [2024-10-08 18:44:36.104285] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.104790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.104805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.104811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.104959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.105112] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.105119] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.105124] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.107512] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.228 [2024-10-08 18:44:36.116945] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.117555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.117585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.117594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.117759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.117909] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.117916] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.117921] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.120328] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.228 [2024-10-08 18:44:36.129633] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.129982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.129997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.130003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.130152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.130300] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.130306] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.130311] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.132700] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.228 [2024-10-08 18:44:36.142281] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.142845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.142879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.228 [2024-10-08 18:44:36.142888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.228 [2024-10-08 18:44:36.143058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.228 [2024-10-08 18:44:36.143210] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.228 [2024-10-08 18:44:36.143216] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.228 [2024-10-08 18:44:36.143221] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.228 [2024-10-08 18:44:36.145614] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.228 [2024-10-08 18:44:36.154910] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.228 [2024-10-08 18:44:36.155513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.228 [2024-10-08 18:44:36.155544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.155553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.155719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.155870] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.155876] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.155881] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.158281] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.229 [2024-10-08 18:44:36.167580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.168082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.168098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.168104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.168253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.168401] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.168406] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.168411] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.170801] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.229 [2024-10-08 18:44:36.180234] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.180685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.180697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.180703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.180851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.181006] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.181012] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.181017] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.183405] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.229 [2024-10-08 18:44:36.192836] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.193473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.193504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.193513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.193678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.193828] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.193835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.193840] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.196241] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.229 [2024-10-08 18:44:36.205537] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.206077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.206108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.206116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.206283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.206434] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.206440] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.206446] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.208846] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.229 [2024-10-08 18:44:36.218148] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.218762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.218793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.218802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.218966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.219123] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.219130] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.219135] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.221549] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.229 [2024-10-08 18:44:36.230714] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.231115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.231146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.231154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.231319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.231471] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.231477] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.231482] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.233883] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.229 [2024-10-08 18:44:36.243323] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.243903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.243933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.243942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.244115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.244267] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.244273] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.244279] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.246669] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.229 [2024-10-08 18:44:36.255960] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.256591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.256621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.256630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.256794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.256945] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.256952] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.256957] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.259358] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.229 [2024-10-08 18:44:36.268653] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.269295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.269325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.269337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.269501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.269652] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.269658] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.269664] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.229 [2024-10-08 18:44:36.272063] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.229 [2024-10-08 18:44:36.281353] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.229 [2024-10-08 18:44:36.281935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.229 [2024-10-08 18:44:36.281965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.229 [2024-10-08 18:44:36.281979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.229 [2024-10-08 18:44:36.282146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.229 [2024-10-08 18:44:36.282297] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.229 [2024-10-08 18:44:36.282303] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.229 [2024-10-08 18:44:36.282309] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.491 [2024-10-08 18:44:36.284702] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.491 [2024-10-08 18:44:36.293998] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.491 [2024-10-08 18:44:36.294596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.491 [2024-10-08 18:44:36.294626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.491 [2024-10-08 18:44:36.294635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.491 [2024-10-08 18:44:36.294800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.491 [2024-10-08 18:44:36.294950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.491 [2024-10-08 18:44:36.294956] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.491 [2024-10-08 18:44:36.294962] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.491 [2024-10-08 18:44:36.297361] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.491 [2024-10-08 18:44:36.306650] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.491 [2024-10-08 18:44:36.307272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.491 [2024-10-08 18:44:36.307303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.491 [2024-10-08 18:44:36.307312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.491 [2024-10-08 18:44:36.307476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.491 [2024-10-08 18:44:36.307626] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.307637] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.307643] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.310042] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.492 [2024-10-08 18:44:36.319335] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.319918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.319948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.319957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.320136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.320287] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.320294] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.320299] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.322702] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.492 [2024-10-08 18:44:36.331996] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.332552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.332583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.332592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.332756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.332907] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.332913] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.332919] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.335316] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.492 [2024-10-08 18:44:36.344603] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.344963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.344982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.344989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.345137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.345285] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.345291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.345296] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.347686] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.492 [2024-10-08 18:44:36.357256] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.357490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.357501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.357506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.357654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.357802] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.357807] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.357814] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.360205] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.492 [2024-10-08 18:44:36.369915] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.370502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.370533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.370542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.370707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.370857] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.370864] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.370869] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.373267] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.492 [2024-10-08 18:44:36.382613] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.383207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.383238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.383247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.383411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.383563] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.383569] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.383574] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.385976] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.492 [2024-10-08 18:44:36.395269] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.395863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.395893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.395902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.396076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.396228] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.396235] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.396240] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.398634] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.492 [2024-10-08 18:44:36.407926] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.408400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.408431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.408440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.408605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.408755] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.408761] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.408767] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.411165] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.492 [2024-10-08 18:44:36.420604] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.421030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.421066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.421075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.421241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.421392] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.421398] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.421404] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.423809] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.492 [2024-10-08 18:44:36.433249] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.433838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.433869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.433878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.492 [2024-10-08 18:44:36.434048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.492 [2024-10-08 18:44:36.434200] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.492 [2024-10-08 18:44:36.434206] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.492 [2024-10-08 18:44:36.434216] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.492 [2024-10-08 18:44:36.436608] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.492 [2024-10-08 18:44:36.445903] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.492 [2024-10-08 18:44:36.446466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.492 [2024-10-08 18:44:36.446497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.492 [2024-10-08 18:44:36.446506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.493 [2024-10-08 18:44:36.446670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.493 [2024-10-08 18:44:36.446821] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.493 [2024-10-08 18:44:36.446828] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.493 [2024-10-08 18:44:36.446833] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.493 [2024-10-08 18:44:36.449229] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.493 [2024-10-08 18:44:36.458522] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.493 [2024-10-08 18:44:36.458959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.493 [2024-10-08 18:44:36.458977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.493 [2024-10-08 18:44:36.458983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.493 [2024-10-08 18:44:36.459132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.493 [2024-10-08 18:44:36.459281] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.493 [2024-10-08 18:44:36.459287] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.493 [2024-10-08 18:44:36.459292] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.493 [2024-10-08 18:44:36.461678] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.493 [2024-10-08 18:44:36.471106] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.493 [2024-10-08 18:44:36.471564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.493 [2024-10-08 18:44:36.471577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.493 [2024-10-08 18:44:36.471583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.493 [2024-10-08 18:44:36.471731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.493 [2024-10-08 18:44:36.471879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.493 [2024-10-08 18:44:36.471885] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.493 [2024-10-08 18:44:36.471890] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.493 [2024-10-08 18:44:36.474282] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.493 [2024-10-08 18:44:36.483709] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.493 [2024-10-08 18:44:36.484272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.493 [2024-10-08 18:44:36.484306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.493 [2024-10-08 18:44:36.484315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.493 [2024-10-08 18:44:36.484482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.493 [2024-10-08 18:44:36.484634] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.493 [2024-10-08 18:44:36.484640] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.493 [2024-10-08 18:44:36.484647] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.493 [2024-10-08 18:44:36.487049] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.493 [2024-10-08 18:44:36.496525] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.493 [2024-10-08 18:44:36.497103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.493 [2024-10-08 18:44:36.497133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.493 [2024-10-08 18:44:36.497142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.493 [2024-10-08 18:44:36.497309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.493 [2024-10-08 18:44:36.497459] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.493 [2024-10-08 18:44:36.497465] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.493 [2024-10-08 18:44:36.497470] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.493 [2024-10-08 18:44:36.499868] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.493 [2024-10-08 18:44:36.509160] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.493 [2024-10-08 18:44:36.509807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.493 [2024-10-08 18:44:36.509838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.493 [2024-10-08 18:44:36.509846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.493 [2024-10-08 18:44:36.510016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.493 [2024-10-08 18:44:36.510167] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.493 [2024-10-08 18:44:36.510173] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.493 [2024-10-08 18:44:36.510179] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.493 [2024-10-08 18:44:36.512571] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
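[Editor's note, for triaging this stretch of the log: errno 111 is ECONNREFUSED on Linux, i.e. nothing is listening at 10.0.0.2:4420 yet, so each reset attempt by the host's bdev_nvme path fails at connect() and the controller is returned to the failed state until the next reconnect poll. The sketch below is a minimal stand-alone illustration of that retry pattern only, not SPDK code; the address, port, cadence, and deadline are taken or inferred from the trace above.]

```python
import errno
import socket
import time

TARGET = ("10.0.0.2", 4420)   # listener address/port from the trace
INTERVAL = 0.012              # ~12 ms between reset attempts, per the timestamps
DEADLINE = time.monotonic() + 5.0

# Probe the target the way the host's reconnect poll effectively does:
# try to connect, expect ECONNREFUSED while no listener exists, retry.
while time.monotonic() < DEADLINE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.settimeout(1.0)
        s.connect(TARGET)
        print("listener is up; the NVMe/TCP ICReq exchange would start here")
        break
    except OSError as e:
        if e.errno != errno.ECONNREFUSED:    # errno 111 in the log
            raise                            # anything else is a different failure
        time.sleep(INTERVAL)                 # mirror the ~12 ms reset cadence
    finally:
        s.close()
else:
    print("gave up: connect() kept failing with errno 111 (ECONNREFUSED)")
```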
00:28:42.493 [2024-10-08 18:44:36.521730] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.493 [2024-10-08 18:44:36.522287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.493 [2024-10-08 18:44:36.522318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:42.493 [2024-10-08 18:44:36.522327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:42.493 [2024-10-08 18:44:36.522502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:42.493 [2024-10-08 18:44:36.522657] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.493 [2024-10-08 18:44:36.522663] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.493 [2024-10-08 18:44:36.522669] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.493 [2024-10-08 18:44:36.525068] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.493 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:42.493 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:28:42.493 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:42.493 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:42.493 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:42.493 [2024-10-08 18:44:36.534370] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.493 [2024-10-08 18:44:36.534940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.493 [2024-10-08 18:44:36.534971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:42.493 [2024-10-08 18:44:36.534985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:42.493 [2024-10-08 18:44:36.535152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:42.493 [2024-10-08 18:44:36.535303] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.493 [2024-10-08 18:44:36.535310] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.493 [2024-10-08 18:44:36.535316] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.493 [2024-10-08 18:44:36.537710] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.493 [2024-10-08 18:44:36.547013] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.753 [2024-10-08 18:44:36.547613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.753 [2024-10-08 18:44:36.547644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:42.753 [2024-10-08 18:44:36.547653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:42.753 [2024-10-08 18:44:36.547818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:42.754 [2024-10-08 18:44:36.547970] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.754 [2024-10-08 18:44:36.547983] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.754 [2024-10-08 18:44:36.547989] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.754 [2024-10-08 18:44:36.550383] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.754 [2024-10-08 18:44:36.559679] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.754 [2024-10-08 18:44:36.560311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.754 [2024-10-08 18:44:36.560342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:42.754 [2024-10-08 18:44:36.560351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:42.754 [2024-10-08 18:44:36.560516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:42.754 [2024-10-08 18:44:36.560671] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.754 [2024-10-08 18:44:36.560678] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.754 [2024-10-08 18:44:36.560683] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.754 [2024-10-08 18:44:36.563085] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:42.754 [2024-10-08 18:44:36.572247] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.754 [2024-10-08 18:44:36.572613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.754 [2024-10-08 18:44:36.572629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:42.754 [2024-10-08 18:44:36.572635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:42.754 [2024-10-08 18:44:36.572785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:42.754 [2024-10-08 18:44:36.572933] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.754 [2024-10-08 18:44:36.572939] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.754 [2024-10-08 18:44:36.572944] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.754 [2024-10-08 18:44:36.575338] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.754 [2024-10-08 18:44:36.575790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:42.754 [2024-10-08 18:44:36.584909] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.754 [2024-10-08 18:44:36.585375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.754 [2024-10-08 18:44:36.585388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420
00:28:42.754 [2024-10-08 18:44:36.585393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set
00:28:42.754 [2024-10-08 18:44:36.585542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor
00:28:42.754 [2024-10-08 18:44:36.585689] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.754 [2024-10-08 18:44:36.585695] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.754 [2024-10-08 18:44:36.585700] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.754 [2024-10-08 18:44:36.588091] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.754 [2024-10-08 18:44:36.597520] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.754 [2024-10-08 18:44:36.598010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.754 [2024-10-08 18:44:36.598030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.754 [2024-10-08 18:44:36.598036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.754 [2024-10-08 18:44:36.598189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.754 [2024-10-08 18:44:36.598338] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.754 [2024-10-08 18:44:36.598344] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.754 [2024-10-08 18:44:36.598349] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.754 [2024-10-08 18:44:36.600744] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.754 Malloc0 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.754 [2024-10-08 18:44:36.610174] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.754 [2024-10-08 18:44:36.610645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.754 [2024-10-08 18:44:36.610658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.754 [2024-10-08 18:44:36.610663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.754 [2024-10-08 18:44:36.610811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.754 [2024-10-08 18:44:36.610959] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.754 [2024-10-08 18:44:36.610964] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.754 [2024-10-08 18:44:36.610969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:42.754 [2024-10-08 18:44:36.613360] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.754 [2024-10-08 18:44:36.622803] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.754 [2024-10-08 18:44:36.623270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.754 [2024-10-08 18:44:36.623282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.754 [2024-10-08 18:44:36.623288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.754 [2024-10-08 18:44:36.623436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.754 [2024-10-08 18:44:36.623584] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.754 [2024-10-08 18:44:36.623590] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.754 [2024-10-08 18:44:36.623601] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.754 [2024-10-08 18:44:36.625992] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.754 [2024-10-08 18:44:36.635418] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.754 [2024-10-08 18:44:36.635869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.754 [2024-10-08 18:44:36.635881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec100 with addr=10.0.0.2, port=4420 00:28:42.754 [2024-10-08 18:44:36.635887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec100 is same with the state(6) to be set 00:28:42.754 [2024-10-08 18:44:36.636038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec100 (9): Bad file descriptor 00:28:42.754 [2024-10-08 18:44:36.636186] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.754 [2024-10-08 18:44:36.636192] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.754 [2024-10-08 18:44:36.636197] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.754 [2024-10-08 18:44:36.638006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.754 [2024-10-08 18:44:36.638583] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.754 18:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1405140 00:28:42.754 [2024-10-08 18:44:36.648010] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.754 [2024-10-08 18:44:36.762863] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
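Interleaved with the reconnect noise, the xtrace lines above (host/bdevperf.sh@17 through @21) contain the complete target bring-up for this test. Collected in order, and assuming rpc_cmd is the harness's wrapper around scripts/rpc.py (an assumption; the command names and arguments below are taken verbatim from the trace), the sequence is:

  # Target bring-up as traced above; rpc_cmd is assumed to forward to
  # scripts/rpc.py against the running nvmf_tgt.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte I/O unit size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM disk, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the last call does a listener exist on 10.0.0.2:4420, which is why the retry loop finally converges: the "Resetting controller successful" notice above immediately follows the "Target Listening" notice.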
00:28:44.391 4699.71 IOPS, 18.36 MiB/s
[2024-10-08T16:44:39.388Z] 5725.12 IOPS, 22.36 MiB/s
[2024-10-08T16:44:40.327Z] 6534.78 IOPS, 25.53 MiB/s
[2024-10-08T16:44:41.268Z] 7183.80 IOPS, 28.06 MiB/s
[2024-10-08T16:44:42.208Z] 7693.09 IOPS, 30.05 MiB/s
[2024-10-08T16:44:43.148Z] 8136.00 IOPS, 31.78 MiB/s
[2024-10-08T16:44:44.088Z] 8487.31 IOPS, 33.15 MiB/s
[2024-10-08T16:44:45.472Z] 8801.00 IOPS, 34.38 MiB/s
[2024-10-08T16:44:45.472Z] 9076.07 IOPS, 35.45 MiB/s
00:28:51.415 Latency(us)
00:28:51.415 [2024-10-08T16:44:45.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:51.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:51.415 Verification LBA range: start 0x0 length 0x4000
00:28:51.415 Nvme1n1 : 15.01 9079.06 35.47 13497.29 0.00 5650.58 546.13 15510.19
00:28:51.415 [2024-10-08T16:44:45.472Z] ===================================================================================================================
00:28:51.415 [2024-10-08T16:44:45.472Z] Total : 9079.06 35.47 13497.29 0.00 5650.58 546.13 15510.19
00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1406220 ']'
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1406220
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1406220 ']'
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1406220
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406220
00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406220' 00:28:51.415 killing process with pid 1406220 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1406220 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1406220 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.415 18:44:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.957 00:28:53.957 real 0m28.444s 00:28:53.957 user 1m3.145s 00:28:53.957 sys 0m7.813s 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:53.957 ************************************ 00:28:53.957 END TEST nvmf_bdevperf 00:28:53.957 ************************************ 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.957 ************************************ 00:28:53.957 START TEST nvmf_target_disconnect 00:28:53.957 ************************************ 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:53.957 * Looking for test storage... 
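With bdevperf done, the teardown traced above mirrors the bring-up: unload the kernel NVMe modules, kill the target, strip only the suite's own firewall rules, and flush the test NIC's address. Reassembled from the traced commands only (iptr and remove_spdk_ns are nvmf/common.sh helpers whose bodies are partly hidden behind the xtrace redirection, so this is a sketch, not the exact function text):

  # Teardown sketch assembled from the nvmftestfini trace above.
  modprobe -v -r nvme-tcp          # also unloads nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill 1406220                     # killprocess: terminate the nvmf_tgt pid recorded at startup
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK_NVMF-tagged rules
  ip -4 addr flush cvl_0_1         # return the initiator-side NIC to a clean state

The suite then starts the next test, nvmf_target_disconnect, whose test-storage probe ("Looking for test storage...") continues below.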
00:28:53.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:53.957 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:53.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.958 --rc genhtml_branch_coverage=1 00:28:53.958 --rc genhtml_function_coverage=1 00:28:53.958 --rc genhtml_legend=1 00:28:53.958 --rc geninfo_all_blocks=1 00:28:53.958 --rc geninfo_unexecuted_blocks=1 00:28:53.958 00:28:53.958 ' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:53.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.958 --rc genhtml_branch_coverage=1 00:28:53.958 --rc genhtml_function_coverage=1 00:28:53.958 --rc genhtml_legend=1 00:28:53.958 --rc geninfo_all_blocks=1 00:28:53.958 --rc geninfo_unexecuted_blocks=1 00:28:53.958 00:28:53.958 ' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:53.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.958 --rc genhtml_branch_coverage=1 00:28:53.958 --rc genhtml_function_coverage=1 00:28:53.958 --rc genhtml_legend=1 00:28:53.958 --rc geninfo_all_blocks=1 00:28:53.958 --rc geninfo_unexecuted_blocks=1 00:28:53.958 00:28:53.958 ' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:53.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.958 --rc genhtml_branch_coverage=1 00:28:53.958 --rc genhtml_function_coverage=1 00:28:53.958 --rc genhtml_legend=1 00:28:53.958 --rc geninfo_all_blocks=1 00:28:53.958 --rc geninfo_unexecuted_blocks=1 00:28:53.958 00:28:53.958 ' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:53.958 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.959 18:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:02.093 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:02.093 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:02.093 Found net devices under 0000:31:00.0: cvl_0_0 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:02.093 Found net devices under 0000:31:00.1: cvl_0_1 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:02.093 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:29:02.094 00:29:02.094 --- 10.0.0.2 ping statistics --- 00:29:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.094 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:02.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:29:02.094 00:29:02.094 --- 10.0.0.1 ping statistics --- 00:29:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.094 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 ************************************ 00:29:02.094 START TEST nvmf_target_disconnect_tc1 00:29:02.094 ************************************ 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.094 18:44:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.094 [2024-10-08 18:44:55.622989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.094 [2024-10-08 18:44:55.623057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1201dc0 with addr=10.0.0.2, port=4420 00:29:02.094 [2024-10-08 18:44:55.623090] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:02.094 [2024-10-08 18:44:55.623106] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:02.094 [2024-10-08 18:44:55.623115] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:02.094 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:02.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:02.094 Initializing NVMe Controllers 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.094 00:29:02.094 real 0m0.131s 00:29:02.094 user 0m0.058s 00:29:02.094 sys 0m0.073s 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 ************************************ 00:29:02.094 END TEST nvmf_target_disconnect_tc1 00:29:02.094 ************************************ 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 ************************************ 00:29:02.094 START TEST nvmf_target_disconnect_tc2 00:29:02.094 ************************************ 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1412481 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1412481 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1412481 ']' 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.094 18:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 [2024-10-08 18:44:55.784391] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:29:02.095 [2024-10-08 18:44:55.784448] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.095 [2024-10-08 18:44:55.850818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.095 [2024-10-08 18:44:55.934650] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.095 [2024-10-08 18:44:55.934710] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:02.095 [2024-10-08 18:44:55.934718] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:02.095 [2024-10-08 18:44:55.934723] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:02.095 [2024-10-08 18:44:55.934727] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:02.095 [2024-10-08 18:44:55.936593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:29:02.095 [2024-10-08 18:44:55.936757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:29:02.095 [2024-10-08 18:44:55.936923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:29:02.095 [2024-10-08 18:44:55.936924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.095 Malloc0
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.095 [2024-10-08 18:44:56.130841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.095 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.354 [2024-10-08 18:44:56.171258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:02.354 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1412618
00:29:02.355 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:02.355 18:44:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:04.270 18:44:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1412481
00:29:04.270 18:44:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:04.270 Read completed with error
(sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 [2024-10-08 18:44:58.209769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 
00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Write completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.270 Read completed with error (sct=0, sc=8) 00:29:04.270 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 [2024-10-08 18:44:58.210067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 
starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Write completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 Read completed with error (sct=0, sc=8) 00:29:04.271 starting I/O failed 00:29:04.271 [2024-10-08 18:44:58.210389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.271 [2024-10-08 18:44:58.210865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.210900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.211346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.211405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.211786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.211801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.212046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.212060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.212487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.212546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.212816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.212832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 
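What the storm above records: two seconds into the run, target_disconnect.sh kills nvmf_tgt (pid 1412481) with SIGKILL, so each of the reconnect example's qpairs fails its in-flight commands, one "completed with error (sct=0, sc=8)" / "starting I/O failed" pair per command, up to the -q 32 queue depth; sct=0, sc=8 is the generic "command aborted due to SQ deletion" status, if I read the NVMe status tables correctly. spdk_nvme_qpair_process_completions() then reports each dead connection as CQ transport error -6 (-ENXIO), and every later connect() to 10.0.0.2:4420 is refused with errno 111 because nothing listens there any more. A sketch of the same provisioning-and-disconnect sequence outside the harness, using scripts/rpc.py, which is what the rpc_cmd records above wrap; $SPDK and $nvmfpid carry over from the launch sketch earlier and are this sketch's assumptions, not harness variables:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o         # flags exactly as in the log
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # start the I/O load, then yank the target away mid-run
    "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"

    # decoding the two error numbers seen in the records above
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused
    python3 -c 'import errno, os; print(errno.errorcode[6], os.strerror(6))'       # ENXIO No such device or address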
00:29:04.271 [2024-10-08 18:44:58.213183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.213198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.213540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.213551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.213782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.213794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.214022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.214035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.214345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.214357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.214681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.214693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.214877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.214889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.215244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.215256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.215599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.215611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.215937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.215949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 
00:29:04.271 [2024-10-08 18:44:58.216319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.216332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.216551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.216563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.216875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.216887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.217221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.217234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.217556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.217568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.217916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.217928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.271 [2024-10-08 18:44:58.218267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.271 [2024-10-08 18:44:58.218279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.271 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.218609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.218621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.218880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.218893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.219132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.219147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 
00:29:04.272 [2024-10-08 18:44:58.219468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.219480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.219821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.219832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.220166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.220177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.220482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.220493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.220820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.220830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.221170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.221182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.221503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.221514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.221686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.221696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.222011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.222025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.222409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.222420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 
00:29:04.272 [2024-10-08 18:44:58.222725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.222739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.223045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.223055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.223360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.223371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.223718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.223729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.224073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.224085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.224332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.224344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.224494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.224505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.224781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.224791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.225140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.225151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.225450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.225460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 
00:29:04.272 [2024-10-08 18:44:58.225772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.225782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.226115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.226126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.226436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.226446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.226765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.226776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.226996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.227008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.227379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.227398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.227621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.227632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.227993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.228005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.228395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.228407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.228721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.228733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 
00:29:04.272 [2024-10-08 18:44:58.229058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.229070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.229405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.229417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.229731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.229741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.230074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.230087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.230414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.230425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.230757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.230768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-10-08 18:44:58.231108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-10-08 18:44:58.231119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.231273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.231285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.231540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.231550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.231785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.231795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-10-08 18:44:58.232041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.232053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.232306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.232317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.232656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.232667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.232860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.232872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.233193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.233206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.233526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.233540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.233884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.233896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.234302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.234315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.234635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.234648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.235011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.235024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-10-08 18:44:58.235406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.235422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.235620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.235634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.235999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.236013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.236367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.236380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.236703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.236716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.237050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.237065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.237336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.237349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.237561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.237574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.237960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.237973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.238335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.238358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-10-08 18:44:58.238669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.238683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.238995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.239009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.239249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.239263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.239576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.239588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.239886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.239899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.240216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.240229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.240513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.240525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.240877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.240890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.241129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.241142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.241477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.241490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-10-08 18:44:58.241653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.241667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.242050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.242063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.242386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.242399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.242722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.242734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.243137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.243150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.243439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.243452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-10-08 18:44:58.243764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-10-08 18:44:58.243776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.274 [2024-10-08 18:44:58.244105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-10-08 18:44:58.244122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-10-08 18:44:58.244307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-10-08 18:44:58.244322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-10-08 18:44:58.244543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-10-08 18:44:58.244555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 
00:29:04.274 [2024-10-08 18:44:58.244950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.274 [2024-10-08 18:44:58.244964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:04.274 qpair failed and we were unable to recover it.
00:29:04.274 [... the same three-line sequence repeats for every reconnect attempt between 18:44:58.244950 and 18:44:58.321705: posix_sock_create fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f9280000b90 at 10.0.0.2:4420, and the qpair cannot be recovered ...]
00:29:04.279 [2024-10-08 18:44:58.321677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.279 [2024-10-08 18:44:58.321705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:04.279 qpair failed and we were unable to recover it.
00:29:04.550 [2024-10-08 18:44:58.322072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.550 [2024-10-08 18:44:58.322104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.322499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.322529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.322923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.322951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.323410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.323440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.323776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.323804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.324136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.324166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.324464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.324492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.324741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.324774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.325133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.325164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.325532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.325561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 
00:29:04.551 [2024-10-08 18:44:58.325928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.325956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.326327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.326355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.326722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.326750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.327099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.327128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.327365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.327393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.327743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.327771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.328151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.328180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.328439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.328466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.328818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.328846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.329206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.329242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 
00:29:04.551 [2024-10-08 18:44:58.329588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.329617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.329964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.330009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.330349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.330378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.330706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.330733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.331103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.331132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.331499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.331527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.331865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.331894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.332250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.332285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.332625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.332656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.333017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.333046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 
00:29:04.551 [2024-10-08 18:44:58.333431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.333459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.333830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.333858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.334108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.334142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.334389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.334421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.334767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.334796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.335044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.335074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.335436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.335463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.335834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.335861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.336267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.336296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 00:29:04.551 [2024-10-08 18:44:58.336695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.551 [2024-10-08 18:44:58.336722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.551 qpair failed and we were unable to recover it. 
00:29:04.552 [2024-10-08 18:44:58.336986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.337015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.337390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.337418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.337796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.337824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.338211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.338240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.338497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.338526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.338769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.338800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.339170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.339200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.339561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.339589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.339950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.339987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.340351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.340379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 
00:29:04.552 [2024-10-08 18:44:58.340726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.340753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.341109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.341139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.341383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.341411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.341778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.341805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.342150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.342181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.342520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.342548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.342914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.342942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.343302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.343331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.343683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.343710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.344069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.344098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 
00:29:04.552 [2024-10-08 18:44:58.344462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.344490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.344716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.344746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.345117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.345146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.345483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.345511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.345932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.345960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.346239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.346269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.346616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.346644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.346889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.346924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.347211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.347242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.347610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.347639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 
00:29:04.552 [2024-10-08 18:44:58.348006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.348035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.348403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.348431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.348811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.348839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.349207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.349236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.349594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.349622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.349973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.350010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.350334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.350362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.350710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.350737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.351107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.351137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 00:29:04.552 [2024-10-08 18:44:58.351500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.552 [2024-10-08 18:44:58.351529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.552 qpair failed and we were unable to recover it. 
00:29:04.552 [2024-10-08 18:44:58.351879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.351907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.352296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.352327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.352700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.352728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.353166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.353195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.353533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.353561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.353962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.354014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.354388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.354416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.354774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.354803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.355167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.355196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.355566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.355594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 
00:29:04.553 [2024-10-08 18:44:58.356035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.356064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.356422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.356449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.356808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.356837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.357105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.357134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.357525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.357554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.357913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.357941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.358314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.358344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.358586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.358618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.358995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.359026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.359435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.359463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 
00:29:04.553 [2024-10-08 18:44:58.359857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.359885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.360242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.360271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.360617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.360645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.361004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.361034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.361425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.361455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.361833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.361860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.362211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.362242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.362602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.362637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.362872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.362899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.363198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.363228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 
00:29:04.553 [2024-10-08 18:44:58.363541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.363569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.363927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.363955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.364323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.364353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.364725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.364752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.365171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.365200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.365528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.365556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.365915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.365944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.366311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.366342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.366702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.366730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 00:29:04.553 [2024-10-08 18:44:58.367076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.553 [2024-10-08 18:44:58.367106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.553 qpair failed and we were unable to recover it. 
00:29:04.553 [2024-10-08 18:44:58.367470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.367498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.367846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.367875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.368139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.368169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.368413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.368441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.368799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.368827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.369168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.369197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.369553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.369582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.369936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.369964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.370361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.370390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.370730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.370759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 
00:29:04.554 [2024-10-08 18:44:58.371128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.371158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.371498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.371528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.371898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.371926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.372291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.372320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.372678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.372708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.372832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.372862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.373199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.373229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.373605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.373632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.373999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.374028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 00:29:04.554 [2024-10-08 18:44:58.374396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.554 [2024-10-08 18:44:58.374425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.554 qpair failed and we were unable to recover it. 
00:29:04.559 [2024-10-08 18:44:58.450853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.559 [2024-10-08 18:44:58.450882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:04.559 qpair failed and we were unable to recover it.
00:29:04.559 [2024-10-08 18:44:58.451227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.451256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.451625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.451655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.451904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.451934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.452220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.452251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.452600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.452629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.452868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.452897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.453123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.453154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.453535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.453566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.453936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.453965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.454363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.454394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 
00:29:04.559 [2024-10-08 18:44:58.454783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.454813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.455188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.559 [2024-10-08 18:44:58.455218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.559 qpair failed and we were unable to recover it. 00:29:04.559 [2024-10-08 18:44:58.455598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.455627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.456090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.456120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.456477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.456505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.456854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.456883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.457233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.457263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.457619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.457648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.458014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.458043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.458403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.458431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 
00:29:04.560 [2024-10-08 18:44:58.458808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.458837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.459202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.459233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.459598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.459626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.459852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.459883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.460259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.460289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.460707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.460735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.461110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.461140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.461525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.461554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.461863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.461894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.462055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.462092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 
00:29:04.560 [2024-10-08 18:44:58.462464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.462493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.462751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.462780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.463150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.463181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.463549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.463578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.464033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.464069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.464411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.464441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.464703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.464731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.465104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.465134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.465507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.465535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.465888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.465916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 
00:29:04.560 [2024-10-08 18:44:58.466308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.466338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.466691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.466719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.467081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.467110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.467487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.467515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.467895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.467923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.468294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.468324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.468611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.468640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.468953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.468992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.469156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.469190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.469553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.469582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 
00:29:04.560 [2024-10-08 18:44:58.469962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.470014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.470375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.560 [2024-10-08 18:44:58.470403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.560 qpair failed and we were unable to recover it. 00:29:04.560 [2024-10-08 18:44:58.470757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.470785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.471151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.471181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.471545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.471573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.471932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.471960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.472334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.472362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.472724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.472752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.473016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.473049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.473351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.473379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 
00:29:04.561 [2024-10-08 18:44:58.473740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.473769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.474019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.474050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.474293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.474322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.474730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.474759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.475116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.475145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.475509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.475537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.475989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.476019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.476363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.476391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.476762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.476790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.477184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.477214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 
00:29:04.561 [2024-10-08 18:44:58.477417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.477445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.477755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.477783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.478169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.478199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.478554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.478582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.478961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.479005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.479383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.479412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.479726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.479754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.480128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.480157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.480542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.480571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.480956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.480994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 
00:29:04.561 [2024-10-08 18:44:58.481218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.481246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.481587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.481617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.482015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.482044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.482380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.482409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.482793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.482821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.483245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.483275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.483511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.483538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.483777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.483804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.484195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.484226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.484569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.484598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 
00:29:04.561 [2024-10-08 18:44:58.484968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.485009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.485272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.485300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.561 [2024-10-08 18:44:58.485514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.561 [2024-10-08 18:44:58.485543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.561 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.485892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.485921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.486289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.486320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.486564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.486592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.486969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.487011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.487388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.487416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.487796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.487824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.488276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.488308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 
00:29:04.562 [2024-10-08 18:44:58.488587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.488615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.488989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.489020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.489380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.489408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.489738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.489767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.490130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.490159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.490523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.490552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.490812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.490844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.491209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.491238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.491668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.491697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.492098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.492128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 
00:29:04.562 [2024-10-08 18:44:58.492491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.492520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.492762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.492791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.493182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.493211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.493580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.493608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.493968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.494013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.494273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.494304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.494671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.494702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.495127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.495158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.495443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.495470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.495841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.495869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 
00:29:04.562 [2024-10-08 18:44:58.496300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.496329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.496674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.496703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.497063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.497092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.497458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.497486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.497724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.497755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.498142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.498172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.498550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.498578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.498935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.498963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.499355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.499385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 00:29:04.562 [2024-10-08 18:44:58.499756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.499784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it. 
00:29:04.562 [2024-10-08 18:44:58.500211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.562 [2024-10-08 18:44:58.500241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.562 qpair failed and we were unable to recover it.
[... 18:44:58.500211 through 18:44:58.580420: the same three messages (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeat for every reconnect attempt; only the timestamps advance ...]
00:29:04.568 [2024-10-08 18:44:58.580390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.580420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it.
00:29:04.568 [2024-10-08 18:44:58.580777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.580806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.581138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.581169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.581507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.581536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.581878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.581906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.582201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.582229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.582613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.582642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.583009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.583039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.583402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.583431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.583778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.583807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.584177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.584207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 
00:29:04.568 [2024-10-08 18:44:58.584554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.584581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.584819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.584851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.585221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.585251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.585659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.585690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.586045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.586083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.586435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.586464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.586833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.586862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.587208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.587239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.587611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.587639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.587985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.588016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 
00:29:04.568 [2024-10-08 18:44:58.588365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.588394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.588757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.588786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.588990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.589023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.589396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.589424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.589794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.589823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.590216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.590248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.590472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.568 [2024-10-08 18:44:58.590502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.568 qpair failed and we were unable to recover it. 00:29:04.568 [2024-10-08 18:44:58.590778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.590806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.591161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.591191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.591544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.591574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 
00:29:04.569 [2024-10-08 18:44:58.591931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.591960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.592312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.592340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.592703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.592730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.592987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.593017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.593367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.593395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.593764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.593792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.594164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.594194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.594549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.594577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.594920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.594948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.595365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.595394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 
00:29:04.569 [2024-10-08 18:44:58.595672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.595699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.596071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.596103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.569 [2024-10-08 18:44:58.596456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.569 [2024-10-08 18:44:58.596485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.569 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.596827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.596858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.597216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.597248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.597618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.597646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.598007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.598036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.598401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.598429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.598774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.598802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.599182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.599212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 
00:29:04.841 [2024-10-08 18:44:58.599573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.599602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.599999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.600029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.600260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.600290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.600653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.600682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.601040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.601076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.601321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.601352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.601718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.601746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.602103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.602134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.602523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.602550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.602913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.602942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 
00:29:04.841 [2024-10-08 18:44:58.603246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.603276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.603624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.603654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.604015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.604044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.604431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.604458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.841 [2024-10-08 18:44:58.604798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.841 [2024-10-08 18:44:58.604827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.841 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.605207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.605236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.605620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.605648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.605990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.606020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.606404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.606432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.606722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.606749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 
00:29:04.842 [2024-10-08 18:44:58.607100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.607130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.607496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.607524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.607952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.607992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.608289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.608317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.608678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.608706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.609065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.609095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.609462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.609490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.609855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.609884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.610274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.610303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.610704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.610731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 
00:29:04.842 [2024-10-08 18:44:58.611099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.611127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.611510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.611539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.611905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.611932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.612319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.612348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.612709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.612738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.613095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.613124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.613571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.613599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.613841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.613868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.614237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.614266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.614628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.614656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 
00:29:04.842 [2024-10-08 18:44:58.615103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.615133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.615497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.615525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.615896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.615924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.616302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.616330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.616715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.616749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.617084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.617114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.617484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.617512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.617878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.617906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.618275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.618304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.618669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.618697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 
00:29:04.842 [2024-10-08 18:44:58.619038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.619067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.619422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.619450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.619812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.619841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.620228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.620257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.842 qpair failed and we were unable to recover it. 00:29:04.842 [2024-10-08 18:44:58.620603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.842 [2024-10-08 18:44:58.620630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.620999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.621028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.621378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.621406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.621751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.621778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.622120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.622151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.622518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.622547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 
00:29:04.843 [2024-10-08 18:44:58.622912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.622939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.623308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.623338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.623576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.623604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.623972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.624014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.624347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.624382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.624622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.624653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.625019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.625049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.625445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.625473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.625831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.625859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.626235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.626264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 
00:29:04.843 [2024-10-08 18:44:58.626642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.626670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.627042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.627071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.627432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.627460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.627822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.627850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.628197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.628227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.628593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.628621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.628994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.629023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.629381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.629409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.629785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.629813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 00:29:04.843 [2024-10-08 18:44:58.630090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.843 [2024-10-08 18:44:58.630119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.843 qpair failed and we were unable to recover it. 
00:29:04.843 [2024-10-08 18:44:58.630355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.843 [2024-10-08 18:44:58.630384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:04.843 qpair failed and we were unable to recover it.
00:29:04.849 [... the three-line failure pattern above repeats back-to-back for every reconnect attempt from 18:44:58.630355 through 18:44:58.710044 (on the order of 200 attempts in this span), each with errno = 111 against tqpair=0x7f9280000b90, addr=10.0.0.2, port=4420 ...]
00:29:04.849 [2024-10-08 18:44:58.710406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.710434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.710811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.710839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.711136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.711165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.711537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.711564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.711887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.711917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.712294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.712323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.712686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.712715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.713082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.713111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.713476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.713505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.713875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.713903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 
00:29:04.849 [2024-10-08 18:44:58.714277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.714307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.714559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.714588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.714949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.714986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.715324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.715352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.715725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.715753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.716120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.716150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.716513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.716540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.716806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.716835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.717068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.717097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.717472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.717500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 
00:29:04.849 [2024-10-08 18:44:58.717753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.717782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.718107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.718143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.718540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.718569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.718807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.718838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.719254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.719284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.849 [2024-10-08 18:44:58.719666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.849 [2024-10-08 18:44:58.719695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.849 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.720073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.720102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.720476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.720505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.720875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.720903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.721214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.721244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 
00:29:04.850 [2024-10-08 18:44:58.721620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.721648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.721995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.722025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.722391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.722419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.722660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.722691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.723055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.723085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.723433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.723463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.723720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.723749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.724186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.724215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.724589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.724616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.725027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.725057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 
00:29:04.850 [2024-10-08 18:44:58.725430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.725458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.725663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.725692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.725911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.725942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.726365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.726395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.726762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.726790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.727159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.727188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.727432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.727461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.727807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.727836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.728249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.728278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.728638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.728665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 
00:29:04.850 [2024-10-08 18:44:58.728806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.728833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.729196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.729226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.729597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.729626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.729998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.730028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.730398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.730427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.730792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.730821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.731175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.731206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.731429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.731460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.731738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.731768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 00:29:04.850 [2024-10-08 18:44:58.732121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.732151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.850 qpair failed and we were unable to recover it. 
00:29:04.850 [2024-10-08 18:44:58.732487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.850 [2024-10-08 18:44:58.732524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.732855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.732895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.733029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.733060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.733363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.733392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.733790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.733819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.734180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.734210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.734454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.734483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.734868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.734897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.735167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.735199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.735556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.735585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 
00:29:04.851 [2024-10-08 18:44:58.735931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.735960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.736328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.736357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.736715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.736744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.737103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.737132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.737503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.737531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.737904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.737932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.738352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.738381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.738655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.738683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.739044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.739074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.739458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.739485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 
00:29:04.851 [2024-10-08 18:44:58.739830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.739857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.740127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.740157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.740498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.740526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.740906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.740934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.741196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.741229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.741447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.741475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.741859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.741887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.742260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.742291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.742709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.742738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.743111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.743141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 
00:29:04.851 [2024-10-08 18:44:58.743523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.743550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.743914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.743942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.744287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.744317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.744692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.744720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.745085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.745115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.745451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.745479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.745635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.745662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.746041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.746070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.746292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.746319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.851 qpair failed and we were unable to recover it. 00:29:04.851 [2024-10-08 18:44:58.746700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.851 [2024-10-08 18:44:58.746729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 
00:29:04.852 [2024-10-08 18:44:58.747106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.747135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.747510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.747543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.747882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.747910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.748262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.748292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.748680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.748708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.749079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.749108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.749489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.749517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.749884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.749912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.750143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.750173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.750544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.750571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 
00:29:04.852 [2024-10-08 18:44:58.750958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.751007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.751267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.751295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.751674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.751702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.752074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.752104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.752470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.752498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.752848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.752878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.753169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.753199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.753588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.753616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.753996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.754025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.754272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.754301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 
00:29:04.852 [2024-10-08 18:44:58.754691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.754720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.755093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.755123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.755488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.755517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.755743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.755771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.756144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.756173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.756530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.756559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.756800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.756828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.757207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.757236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.757458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.757486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.757701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.757733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 
00:29:04.852 [2024-10-08 18:44:58.758100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.758131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.758494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.758522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.758750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.758778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.759134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.759162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.759506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.759535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.759891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.759919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.760267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.760296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.852 [2024-10-08 18:44:58.760661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.852 [2024-10-08 18:44:58.760688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.852 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.761030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.761060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.761507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.761535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 
00:29:04.853 [2024-10-08 18:44:58.761779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.761806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.762167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.762202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.762565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.762594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.762834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.762867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.763227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.763257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.763619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.763648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.763888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.763917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.764193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.764223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.764576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.764605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.764982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.765012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 
00:29:04.853 [2024-10-08 18:44:58.765428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.765456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.765825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.765853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.766218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.766246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.766495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.766527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.766863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.766892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.767122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.767151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.767585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.767614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.767983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.768013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.768376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.768404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.768756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.768784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 
00:29:04.853 [2024-10-08 18:44:58.769124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.769154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.769522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.769550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.769914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.769942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.770397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.770428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.770677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.770706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.771049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.771079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.771437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.771464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.771821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.771849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.772212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.772243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.772588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.772617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 
00:29:04.853 [2024-10-08 18:44:58.772985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.773016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.773397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.773425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.773785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.773813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.774192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.774222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.774603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.774631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.775000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.775030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.853 [2024-10-08 18:44:58.775279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.853 [2024-10-08 18:44:58.775307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.853 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.775645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.775674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.776040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.776070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.776444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.776472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 
00:29:04.854 [2024-10-08 18:44:58.776835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.776863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.777103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.777138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.777333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.777364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.777685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.777714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.778057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.778086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.778443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.778472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.778803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.778833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.779144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.779173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.779494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.779524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.779855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.779883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 
00:29:04.854 [2024-10-08 18:44:58.780229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.780259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.780624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.780653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.781020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.781049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.781463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.781491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.781735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.781764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.782201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.782231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.782592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.782622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.782956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.782995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.783235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.783267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.783687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.783715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 
00:29:04.854 [2024-10-08 18:44:58.784067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.784098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.784454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.784484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.784840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.784868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.785232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.785261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.785639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.785667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.786034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.786063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.786432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.786460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.786811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.786839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.787184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.787214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.787581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.787609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 
00:29:04.854 [2024-10-08 18:44:58.787854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.787885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.788287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.854 [2024-10-08 18:44:58.788317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.854 qpair failed and we were unable to recover it. 00:29:04.854 [2024-10-08 18:44:58.788565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.788593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.788969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.789007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.789379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.789407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.789777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.789806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.790151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.790180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.790596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.790624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.790960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.790997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.791354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.791382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 
00:29:04.855 [2024-10-08 18:44:58.791751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.791779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.792139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.792175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.792513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.792541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.792899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.792928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.793289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.793318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.793692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.794048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.794079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.794461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.794489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.794855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.794883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.795254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.795283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 
00:29:04.855 [2024-10-08 18:44:58.795661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.795688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.796051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.796080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.796449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.796477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.796811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.796840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.797069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.797100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.797466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.797497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.797866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.797894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.798262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.798293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.798544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.798574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.798908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.798937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 
00:29:04.855 [2024-10-08 18:44:58.799311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.799342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.799710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.799737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.800097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.800127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.800474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.800502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.800860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.800888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.801255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.801284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.801649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.801675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.802029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.802057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.802426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.802454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.802817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.802844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 
00:29:04.855 [2024-10-08 18:44:58.803113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.855 [2024-10-08 18:44:58.803142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.855 qpair failed and we were unable to recover it. 00:29:04.855 [2024-10-08 18:44:58.803506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.803536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.803906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.803935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.804330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.804362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.804727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.804756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.805119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.805150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.805382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.805415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.805660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.805689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.805928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.805957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.806207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.806239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 
00:29:04.856 [2024-10-08 18:44:58.806588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.806618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.806990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.807034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.807425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.807455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.807850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.807879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.808007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.808039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.808361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.808390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.808698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.808727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.808952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.808992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.809368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.809397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.809761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.809790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 
00:29:04.856 [2024-10-08 18:44:58.810248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.810279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.810598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.810628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.811029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.811060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.811433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.811463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.811833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.811862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.812234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.812265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.812639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.812669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.812999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.813030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.813381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.813411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.813775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.813804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 
00:29:04.856 [2024-10-08 18:44:58.814168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.814199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.814454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.814484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.814850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.814878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.815284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.815314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.815645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.815674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.815908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.815938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.816330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.816360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.816717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.816747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.817117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.817148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.817519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.817548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 
00:29:04.856 [2024-10-08 18:44:58.817787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.856 [2024-10-08 18:44:58.817816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.856 qpair failed and we were unable to recover it. 00:29:04.856 [2024-10-08 18:44:58.818194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.818223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.818575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.818604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.818822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.818851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.819172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.819203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.819465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.819495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.819855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.819884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.820224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.820255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.820602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.820631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.821030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.821059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 
00:29:04.857 [2024-10-08 18:44:58.821316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.821343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.821703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.821731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.822104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.822134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.822381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.822409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.822784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.822812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.823210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.823239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.823607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.823635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.823987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.824017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.824386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.824414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 00:29:04.857 [2024-10-08 18:44:58.824756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.857 [2024-10-08 18:44:58.824784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:04.857 qpair failed and we were unable to recover it. 
00:29:05.134 [2024-10-08 18:44:58.896379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.896408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.896773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.896802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.897182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.897212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.897573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.897601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.897947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.897987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.898350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.898379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.898753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.898781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.899122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.899151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.899516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.899544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.899882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.899909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 
00:29:05.134 [2024-10-08 18:44:58.900267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.900297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.900657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.900685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.901097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.901127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.901419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.901446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.901809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.901838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.902178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.902208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.902543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.902573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.902940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.902968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.903380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.903409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.134 qpair failed and we were unable to recover it. 00:29:05.134 [2024-10-08 18:44:58.903772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.134 [2024-10-08 18:44:58.903801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 
00:29:05.135 [2024-10-08 18:44:58.904041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.904073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.904488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.904517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.904886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.904915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.905277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.905307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.905538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.905567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.905930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.905959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.906333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.906363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.906774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.906802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.907047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.907083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.907441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.907470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 
00:29:05.135 [2024-10-08 18:44:58.907854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.907882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.908263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.908293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.908637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.908666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.909038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.909068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.909416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.909445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.909697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.909727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.910186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.910216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.910467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.910494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.910862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.910890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.911114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.911142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 
00:29:05.135 [2024-10-08 18:44:58.911527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.911555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.911918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.911947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.912250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.912280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.912502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.912530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.912942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.912970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.913325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.913354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.913721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.913749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.914107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.914138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.914498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.914527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.914921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.914949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 
00:29:05.135 [2024-10-08 18:44:58.915343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.915372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.915751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.915779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.916210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.916240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.916495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.916526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.916769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.916801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.917032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.917066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.917409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.917437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.917805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.917834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.918196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.918225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.135 qpair failed and we were unable to recover it. 00:29:05.135 [2024-10-08 18:44:58.918610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.135 [2024-10-08 18:44:58.918638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 
00:29:05.136 [2024-10-08 18:44:58.918948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.918988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.919369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.919397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.919768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.919796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.920164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.920193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.920550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.920578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.920942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.920969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.921315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.921344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.921708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.921737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.922093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.922129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.922436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.922464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 
00:29:05.136 [2024-10-08 18:44:58.922700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.922731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.922963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.923006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.923413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.923441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.923800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.923830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.924197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.924227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.924604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.924632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.924997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.925027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.925386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.925414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.925789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.925817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.926193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.926222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 
00:29:05.136 [2024-10-08 18:44:58.926581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.926612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.926985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.927014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.927382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.927410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.927783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.927812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.928177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.928206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.928581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.928610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.928947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.928985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.929395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.929423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.929671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.929702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.930050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.930080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 
00:29:05.136 [2024-10-08 18:44:58.930447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.930475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.930849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.930878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.931116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.931147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.931393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.931421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.931765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.931794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.932156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.932188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.932634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.932665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.933030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.933060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.933416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.933444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 00:29:05.136 [2024-10-08 18:44:58.933805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.136 [2024-10-08 18:44:58.933833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.136 qpair failed and we were unable to recover it. 
00:29:05.137 [2024-10-08 18:44:58.934199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.934228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.934594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.934622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.934913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.934941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.935323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.935354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.935709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.935737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.936103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.936132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.936368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.936396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.936738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.936765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.937138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.937174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.937535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.937565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 
00:29:05.137 [2024-10-08 18:44:58.937999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.938029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.938395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.938423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.938668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.938700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.938949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.938993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.939409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.939438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.939802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.939841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.940202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.940232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.940566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.940595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.940936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.940964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.941196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.941227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 
00:29:05.137 [2024-10-08 18:44:58.941598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.941628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.942029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.942058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.942408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.942438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.942831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.942858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.943203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.943233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.943461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.943492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.943864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.943894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.944259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.944288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.944659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.944687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.944940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.944967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 
00:29:05.137 [2024-10-08 18:44:58.945375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.945403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.945761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.945789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.946157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.946186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.946540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.946568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.946944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.946972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.947365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.947395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.947790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.947819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.948171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.948204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.948563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.948592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 00:29:05.137 [2024-10-08 18:44:58.948821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.137 [2024-10-08 18:44:58.948849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.137 qpair failed and we were unable to recover it. 
00:29:05.137 [2024-10-08 18:44:58.949226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.949256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.949628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.949658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.949932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.949961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.950338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.950367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.950718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.950747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.951115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.951145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.951529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.951557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.951902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.951930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.952333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.952369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 00:29:05.138 [2024-10-08 18:44:58.952706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.138 [2024-10-08 18:44:58.952735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.138 qpair failed and we were unable to recover it. 
00:29:05.143 [2024-10-08 18:44:59.023387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.023416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.023796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.023824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.024171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.024202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.024569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.024597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.024986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.025015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.025477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.025506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.025820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.025847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.026203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.026233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.026594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.026624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.026991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.027022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 
00:29:05.143 [2024-10-08 18:44:59.027369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.027398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.027761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.027789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.028145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.028175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.028531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.028559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.028922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.028950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.029340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.029370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.029741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.029770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.030140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.030169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.030516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.030545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.030908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.030936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 
00:29:05.143 [2024-10-08 18:44:59.031294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.031324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.031568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.031597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.031962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.032004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.032334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.032363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.032612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.032641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.032993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.033024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.033212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.033240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.033599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.033627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.143 [2024-10-08 18:44:59.034014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.143 [2024-10-08 18:44:59.034044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.143 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.034404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.034433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 
00:29:05.144 [2024-10-08 18:44:59.034798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.034826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.035193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.035222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.035601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.035630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.035971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.036010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.036284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.036312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.036536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.036574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.036926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.036954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.037216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.037246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.037586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.037616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.037995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.038026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 
00:29:05.144 [2024-10-08 18:44:59.038399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.038427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.038784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.038812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.039175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.039205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.039566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.039595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.039960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.040004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.040335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.040364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.040736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.040765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.041123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.041154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.041520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.041548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.041914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.041943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 
00:29:05.144 [2024-10-08 18:44:59.042362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.042392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.042746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.042774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.043028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.043058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.043421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.043450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.043792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.043820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.044179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.044209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.044652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.044681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.045046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.045075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.045441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.045468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.045827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.045854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 
00:29:05.144 [2024-10-08 18:44:59.046201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.046231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.046537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.046566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.046923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.046952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.047231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.047260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.047620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.144 [2024-10-08 18:44:59.047648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.144 qpair failed and we were unable to recover it. 00:29:05.144 [2024-10-08 18:44:59.048077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.048106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.048436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.048465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.048796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.048826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.049088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.049117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.049460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.049489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 
00:29:05.145 [2024-10-08 18:44:59.049858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.049886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.050141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.050173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.050544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.050573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.050934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.050963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.051384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.051414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.051772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.051807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.052141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.052171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.052533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.052561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.052924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.052953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.053283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.053312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 
00:29:05.145 [2024-10-08 18:44:59.053685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.053714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.054084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.054113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.054473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.054502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.054883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.054911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.055273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.055302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.055667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.055696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.056057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.056087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.056463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.056491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.056802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.056832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.057209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.057239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 
00:29:05.145 [2024-10-08 18:44:59.057596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.057624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.057996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.058025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.058278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.058310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.058645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.058674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.059049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.059078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.059518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.059546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.059907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.059936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.060297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.060327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.060699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.060727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.061086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.061115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 
00:29:05.145 [2024-10-08 18:44:59.061470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.061498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.061862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.061890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.062255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.062286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.062618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.062648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.145 [2024-10-08 18:44:59.063017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.145 [2024-10-08 18:44:59.063046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.145 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.063432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.063461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.063832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.063860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.064233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.064261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.064598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.064626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.064861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.064889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 
00:29:05.146 [2024-10-08 18:44:59.065123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.065155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.065520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.065549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.065915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.065943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.066281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.066310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.066648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.066677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.067035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.067071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.067433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.067461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.067798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.067826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.068198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.068229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.068582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.068610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 
00:29:05.146 [2024-10-08 18:44:59.068967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.069009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.069366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.069395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.069739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.069767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.070128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.070158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.070524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.070552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.070795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.070823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.071163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.071192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.071559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.071590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.071950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.071989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.072389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.072418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 
00:29:05.146 [2024-10-08 18:44:59.072783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.072811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.073156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.073187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.073553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.073582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.073944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.073972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.074313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.074343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.074706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.074735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.074992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.075022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.075280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.075308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.075746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-10-08 18:44:59.075775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.146 qpair failed and we were unable to recover it. 00:29:05.146 [2024-10-08 18:44:59.076194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.147 [2024-10-08 18:44:59.076223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.147 qpair failed and we were unable to recover it. 
00:29:05.147 [2024-10-08 18:44:59.076577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.147 [2024-10-08 18:44:59.076607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.147 qpair failed and we were unable to recover it.
00:29:05.147 [2024-10-08 18:44:59.076952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.147 [2024-10-08 18:44:59.077003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.147 qpair failed and we were unable to recover it.
[... the same three-line error sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats roughly 200 more times, wall-clock timestamps 18:44:59.077251 through 18:44:59.154941, elapsed markers 00:29:05.147 through 00:29:05.152 ...]
00:29:05.152 [2024-10-08 18:44:59.155306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.152 [2024-10-08 18:44:59.155335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.152 qpair failed and we were unable to recover it.
00:29:05.152 [2024-10-08 18:44:59.155699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.155728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.156098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.156129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.156504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.156533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.156945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.156973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.157277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.157305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.157665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.157698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.158041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.158070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.158439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.158469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.158838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.158866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 00:29:05.152 [2024-10-08 18:44:59.159234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.152 [2024-10-08 18:44:59.159263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.152 qpair failed and we were unable to recover it. 
00:29:05.152 [2024-10-08 18:44:59.159694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.159723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.160090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.160120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.160463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.160492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.160853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.160881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.161235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.161263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.161640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.161669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.162035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.162064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.162433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.162462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.162813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.162841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.163199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.163229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 
00:29:05.153 [2024-10-08 18:44:59.163469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.163502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.163855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.163884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.164271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.164300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.164659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.164690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.164932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.164964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.165368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.165398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.165745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.165774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.166141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.166170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.166606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.166634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.167044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.167074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 
00:29:05.153 [2024-10-08 18:44:59.167450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.167478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.167843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.167872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.168231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.168262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.168609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.168638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.169003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.169032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.169390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.169419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.169672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.169702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.170050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.170080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.170448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.170477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.170859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.170887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 
00:29:05.153 [2024-10-08 18:44:59.171223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.171253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.171489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.171520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.171876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.171905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.172273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.172302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.172664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.172692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.173050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.173088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.173461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.173490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.173846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.173876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.175781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.153 [2024-10-08 18:44:59.175850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.153 qpair failed and we were unable to recover it. 00:29:05.153 [2024-10-08 18:44:59.176221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.176259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 
00:29:05.154 [2024-10-08 18:44:59.176633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.176662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.177019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.177049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.177386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.177416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.177785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.177813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.178167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.178198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.178565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.178593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.178958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.179002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.154 [2024-10-08 18:44:59.179356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.154 [2024-10-08 18:44:59.179385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.154 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.179743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.179773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.180189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.180219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 
00:29:05.425 [2024-10-08 18:44:59.180585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.180615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.180967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.181026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.182843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.182904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.183289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.183325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.183689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.183718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.184054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.184084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.184445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.425 [2024-10-08 18:44:59.184473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-10-08 18:44:59.184838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.184866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.185223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.185253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.185617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.185646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 
00:29:05.426 [2024-10-08 18:44:59.185970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.186011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.186286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.186314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.186687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.186716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.187173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.187204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.187569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.187597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.187968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.188012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.188407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.188435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.188788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.188817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.189060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.189091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.189473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.189501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 
00:29:05.426 [2024-10-08 18:44:59.189841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.189869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.190248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.190278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.190636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.190664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.191095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.191124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.191456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.191486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.191829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.191863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.192292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.192322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.192673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.192703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.193081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.193110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.193485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.193513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 
00:29:05.426 [2024-10-08 18:44:59.193874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.193903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.194255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.194284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.194652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.194682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.195045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.195074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.195442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.195471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.195802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.195829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.196205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.196234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.196678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.196707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.196962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.197071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.197431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.197460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 
00:29:05.426 [2024-10-08 18:44:59.197793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.197822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.198199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.198228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.198599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.198627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.198995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.199024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.199379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.199408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.199775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.199803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.200155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.426 [2024-10-08 18:44:59.200183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-10-08 18:44:59.200534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.200563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.200945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.200973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.201401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.201431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 
00:29:05.427 [2024-10-08 18:44:59.201844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.201873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.202230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.202260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.202621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.202651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.203006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.203036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.203395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.203425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.203783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.203811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.204167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.204197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.204560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.204588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.204846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.204875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.205230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.205262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 
00:29:05.427 [2024-10-08 18:44:59.205505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.205534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.205931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.205960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.206365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.206394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.206749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.206778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.207122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.207152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.207496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.207524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.207883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.207911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.208312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.208342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.208693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.208723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 00:29:05.427 [2024-10-08 18:44:59.209095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.427 [2024-10-08 18:44:59.209124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.427 qpair failed and we were unable to recover it. 
00:29:05.427 [2024-10-08 18:44:59.209492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.427 [2024-10-08 18:44:59.209519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.427 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 18:44:59.209 and 18:44:59.288, every retry against the same tqpair, address, and port ...]
00:29:05.432 [2024-10-08 18:44:59.288749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-10-08 18:44:59.288777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.432 qpair failed and we were unable to recover it. 00:29:05.432 [2024-10-08 18:44:59.289124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-10-08 18:44:59.289156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.432 qpair failed and we were unable to recover it. 00:29:05.432 [2024-10-08 18:44:59.289520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-10-08 18:44:59.289550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.432 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.289909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.289938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.290187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.290219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.290560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.290590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.292820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.292882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.293290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.293325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.293707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.293735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.294093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.294123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 
00:29:05.433 [2024-10-08 18:44:59.294470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.294501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.294792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.294821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.295052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.295094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.295439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.295469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.295844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.295873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.296229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.296261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.296631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.296661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.296913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.296949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.297216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.297248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.297454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.297482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 
00:29:05.433 [2024-10-08 18:44:59.297846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.297875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.298249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.298282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.298620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.298651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.299023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.299053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.299417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.299447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.299796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.299827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.300207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.300238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.300606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.300634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.301003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.301034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.301374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.301404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 
00:29:05.433 [2024-10-08 18:44:59.301773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.301802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.302164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.302195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.302564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.302593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.303006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.303036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.303396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.303424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.303787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.303815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.304189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.304219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.304566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.304596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.304952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.304992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.306817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.306879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 
00:29:05.433 [2024-10-08 18:44:59.307317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.307354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.307734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.307766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.308197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-10-08 18:44:59.308228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.433 qpair failed and we were unable to recover it. 00:29:05.433 [2024-10-08 18:44:59.308468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.308500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.308853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.308882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.309222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.309253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.309610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.309639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.309951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.309993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.310349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.310378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.310741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.310769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 
00:29:05.434 [2024-10-08 18:44:59.311136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.311166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.311498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.311527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.311906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.311943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.312382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.312413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.312776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.312805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.313142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.313173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.313522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.313551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.313920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.313948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.314318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.314349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.314708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.314737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 
00:29:05.434 [2024-10-08 18:44:59.315097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.315127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.315502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.315532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.315913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.315942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.316314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.316346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.316708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.316738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.317100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.317130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.317510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.317540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.317906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.317935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.318309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.318339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.318751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.318781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 
00:29:05.434 [2024-10-08 18:44:59.319109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.319139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.319487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.319516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.319879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.319910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.320277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.320311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.320665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.320695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.321049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.321079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.321436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.321467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.321749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.321777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.322021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.434 [2024-10-08 18:44:59.322054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.434 qpair failed and we were unable to recover it. 00:29:05.434 [2024-10-08 18:44:59.322449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.322480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 
00:29:05.435 [2024-10-08 18:44:59.322838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.322867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.323292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.323322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.323666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.323697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.324031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.324061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.324484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.324513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.324739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.324771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.325122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.325152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.325529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.325558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.325920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.325949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.326333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.326363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 
00:29:05.435 [2024-10-08 18:44:59.326778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.326807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.327230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.327259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.327603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.327644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.328010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.328043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.328416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.328444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.328815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.328845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.329203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.329234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.329489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.329518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.329745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.329774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.330157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.330187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 
00:29:05.435 [2024-10-08 18:44:59.330565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.330596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.330953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.330997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.331337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.331366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.331731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.331759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.332129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.332158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.332535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.332562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.332936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.332967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.333388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.333417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.333761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.333791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.334142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.334173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 
00:29:05.435 [2024-10-08 18:44:59.334430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.334459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.334803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.334832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.335199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.335232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.335570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.335599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.335963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.336005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.336363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.336391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.336735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.336763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.337139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.337169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.337527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.337556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.435 qpair failed and we were unable to recover it. 00:29:05.435 [2024-10-08 18:44:59.337899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.435 [2024-10-08 18:44:59.337929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 
00:29:05.436 [2024-10-08 18:44:59.338322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.338353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.338646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.338674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.339036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.339066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.339444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.339473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.339837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.339867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.340239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.340269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.340508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.340536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.340906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.340934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.341301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.341330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 00:29:05.436 [2024-10-08 18:44:59.341686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.436 [2024-10-08 18:44:59.341714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.436 qpair failed and we were unable to recover it. 
00:29:05.436 [2024-10-08 18:44:59.342051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.436 [2024-10-08 18:44:59.342082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.436 qpair failed and we were unable to recover it.
00:29:05.436 [2024-10-08 18:44:59.342330 .. 18:44:59.420593] posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: the same two errors (connect() failed, errno = 111, followed by sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420) repeated for roughly 200 further consecutive connect attempts, each ending with "qpair failed and we were unable to recover it."
00:29:05.441 [2024-10-08 18:44:59.420942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.441 [2024-10-08 18:44:59.420971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.441 qpair failed and we were unable to recover it.
00:29:05.441 [2024-10-08 18:44:59.421333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.421362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.421726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.421754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.422101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.422131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.422508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.422536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.422908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.422936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.423293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.423322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.423687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.423716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.424069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.424099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.424368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.424397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.424715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.424744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 
00:29:05.441 [2024-10-08 18:44:59.425184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.425213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.425621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.425649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.426020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.426049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.426424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.426451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.426829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.426858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.441 qpair failed and we were unable to recover it. 00:29:05.441 [2024-10-08 18:44:59.427304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.441 [2024-10-08 18:44:59.427333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.427583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.427611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.427953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.427992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.428424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.428452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.428812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.428844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 
00:29:05.442 [2024-10-08 18:44:59.429101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.429131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.429508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.429536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.429900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.429928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.430166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.430196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.430454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.430482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.430869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.430898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.431320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.431350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.431719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.431748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.432131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.432161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.432533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.432563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 
00:29:05.442 [2024-10-08 18:44:59.432829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.432857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.433120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.433153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.433450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.433478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.433720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.433749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.434095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.434126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.434479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.434507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.434743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.434774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.435211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.435240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.435602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.435630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.435889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.435917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 
00:29:05.442 [2024-10-08 18:44:59.436310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.436341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.436726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.436754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.437129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.437158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.437507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.437536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.437912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.437940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.438308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.438337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.438729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.438758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.439125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.439154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.439504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.439533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.439768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.439797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 
00:29:05.442 [2024-10-08 18:44:59.440074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.440103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.440424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.440452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.442 [2024-10-08 18:44:59.440799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.442 [2024-10-08 18:44:59.440827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.442 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.441209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.441238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.441490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.441521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.441743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.441772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.442045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.442075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.442398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.442426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.442781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.442809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.443063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.443099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 
00:29:05.443 [2024-10-08 18:44:59.443474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.443503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.443886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.443914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.444275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.444303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.444664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.444693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.445048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.445078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.445458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.445486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.445837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.445865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.446244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.446274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.446637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.446665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.447030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.447060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 
00:29:05.443 [2024-10-08 18:44:59.447445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.447475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.447923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.447951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.448243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.448272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.448642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.448672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.449126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.449157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.449525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.449553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.449908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.449936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.450167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.450197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.450553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.450583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.450969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.451009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 
00:29:05.443 [2024-10-08 18:44:59.451360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.451390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.451763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.451793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.452162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.452193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.452415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.452445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.452817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.452845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.453249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.453278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.453642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.453672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.454014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.454045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.454372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.454400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.454643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.454675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 
00:29:05.443 [2024-10-08 18:44:59.455037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.455067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.455331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.455362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.443 qpair failed and we were unable to recover it. 00:29:05.443 [2024-10-08 18:44:59.455750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.443 [2024-10-08 18:44:59.455779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.456043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.456072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.456434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.456462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.456837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.456867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.457307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.457337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.457706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.457735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.458110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.458141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.458526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.458570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 
00:29:05.444 [2024-10-08 18:44:59.458927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.458957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.459360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.459392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.459745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.459774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.460202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.460233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.460596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.460625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.460991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.461021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.461399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.461430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.461811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.461840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.462218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.462249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.462620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.462649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 
00:29:05.444 [2024-10-08 18:44:59.463096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.463127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.463507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.463537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.463872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.463901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.464265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.464297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.464687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.464716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.465072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.465104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.465467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.465495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.465808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.465844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.466183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.466212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.466565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.466593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 
00:29:05.444 [2024-10-08 18:44:59.466966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.467005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.467358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.467387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.467755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.467785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.468143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.468174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.468559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.468587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.468949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.468989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.469344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.469373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.469737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.469766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.469992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.470023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 00:29:05.444 [2024-10-08 18:44:59.470260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.444 [2024-10-08 18:44:59.470289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.444 qpair failed and we were unable to recover it. 
00:29:05.444 [2024-10-08 18:44:59.470640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.444 [2024-10-08 18:44:59.470668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.444 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back for every reconnect attempt from 18:44:59.470 through 18:44:59.548, with only the timestamps changing ...]
00:29:05.722 [2024-10-08 18:44:59.548203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.722 [2024-10-08 18:44:59.548238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.722 qpair failed and we were unable to recover it.
00:29:05.722 [2024-10-08 18:44:59.548595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.548624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.548999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.549029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.549382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.549410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.549779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.549807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.550178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.550207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.550458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.550490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.550869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.550900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.551146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.551175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.551528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.551558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.551923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.551951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 
00:29:05.722 [2024-10-08 18:44:59.552318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.552346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.552706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.552734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.553099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.553128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.553386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.553414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.553610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.553639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.554013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.554043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.554381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.554409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.554778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.554806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.555153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.555183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.555553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.555581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 
00:29:05.722 [2024-10-08 18:44:59.555922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.555951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.556319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.556348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.556749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.556777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.556969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.557009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.557424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.557452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.557810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.557838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.558186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.558217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.558573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.558602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.558967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.559026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.559396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.559424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 
00:29:05.722 [2024-10-08 18:44:59.559782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.559810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.560085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.560117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.560475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.560505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.560837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.560866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.561207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.561238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.561596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.561625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.561952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.561995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.562349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.562377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.722 [2024-10-08 18:44:59.562628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.722 [2024-10-08 18:44:59.562660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.722 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.563035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.563073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 
00:29:05.723 [2024-10-08 18:44:59.563330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.563359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.563738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.563767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.564127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.564157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.564516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.564545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.564917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.564946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.565389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.565419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.565800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.565829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.566204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.566233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.566590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.566618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.566972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.567012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 
00:29:05.723 [2024-10-08 18:44:59.567349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.567379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.567729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.567759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.568124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.568154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.568505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.568534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.568879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.568907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.569145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.569175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.569610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.569640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.569967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.570010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.570360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.570389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.570725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.570753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 
00:29:05.723 [2024-10-08 18:44:59.571114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.571144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.571510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.571540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.571922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.571950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.572318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.572349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.572785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.572813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.573198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.573235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.573609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.573638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.574008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.574037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.574410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.574438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.574779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.574808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 
00:29:05.723 [2024-10-08 18:44:59.575166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.575196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.575552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.575580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.575945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.575995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.576356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.576384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.576635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.576664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.577014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.577044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.577407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.577436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.577811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.577841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.578147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.723 [2024-10-08 18:44:59.578178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.723 qpair failed and we were unable to recover it. 00:29:05.723 [2024-10-08 18:44:59.578436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.578470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 
00:29:05.724 [2024-10-08 18:44:59.578866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.578894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.579288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.579320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.579715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.579745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.580117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.580147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.580512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.580541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.580776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.580807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.581154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.581183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.581550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.581579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.581946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.581986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.582356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.582385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 
00:29:05.724 [2024-10-08 18:44:59.582688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.582718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.583093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.583123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.583495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.583524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.583769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.583798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.584057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.584086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.584475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.584504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.584796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.584824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.585159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.585188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.585556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.585585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.585945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.585984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 
00:29:05.724 [2024-10-08 18:44:59.586340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.586368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.586744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.586772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.587007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.587038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.587445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.587473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.587725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.587754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.588107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.588136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.588516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.588545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.588919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.588947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.589349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.589379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.589741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.589769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 
00:29:05.724 [2024-10-08 18:44:59.590118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.590147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.590512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.590542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.590908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.590938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.591105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.591138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.591343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.591374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.591746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.591776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.592144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.592174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.592570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.592598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.592953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.592992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 00:29:05.724 [2024-10-08 18:44:59.593336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.724 [2024-10-08 18:44:59.593371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.724 qpair failed and we were unable to recover it. 
00:29:05.725 [2024-10-08 18:44:59.593739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.593767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.594127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.594156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.594507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.594536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.594776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.594804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.595056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.595087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.595440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.595470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.595855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.595883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.596234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.596263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.596472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.596501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 00:29:05.725 [2024-10-08 18:44:59.596863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.725 [2024-10-08 18:44:59.596892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.725 qpair failed and we were unable to recover it. 
00:29:05.725 [2024-10-08 18:44:59.597182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.725 [2024-10-08 18:44:59.597211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.725 qpair failed and we were unable to recover it.
00:29:05.725 [... the same three-message failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously between 18:44:59.597570 and 18:44:59.676441; duplicate records elided ...]
00:29:05.730 [2024-10-08 18:44:59.676768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.730 [2024-10-08 18:44:59.676798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.730 qpair failed and we were unable to recover it.
00:29:05.730 [2024-10-08 18:44:59.677167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.677198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.677535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.677562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.677926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.677954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.678321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.678350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.678720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.678748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.679085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.679116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.679462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.679496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.679739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.679767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.680135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.680165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 00:29:05.730 [2024-10-08 18:44:59.680531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.680559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.730 qpair failed and we were unable to recover it. 
00:29:05.730 [2024-10-08 18:44:59.680931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.730 [2024-10-08 18:44:59.680959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.681335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.681365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.681716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.681744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.682101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.682132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.682534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.682562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.682927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.682957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.683311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.683342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.683707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.683736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.684103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.684142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.684513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.684543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 
00:29:05.731 [2024-10-08 18:44:59.684786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.684818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.685172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.685203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.685546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.685576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.685825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.685854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.686203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.686234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.686470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.686498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.686858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.686886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.687229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.687259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.687633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.687662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.687890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.687920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 
00:29:05.731 [2024-10-08 18:44:59.688291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.688322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.688669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.688698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.689126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.689156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.689541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.689571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.689922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.689951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.690321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.690350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.690606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.690635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.690902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.690932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.691320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.691351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.691716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.691746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 
00:29:05.731 [2024-10-08 18:44:59.692111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.692140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.692542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.692571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.692925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.692953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.693314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.693345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.693594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.693623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.693999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.694029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.694359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.694395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.694738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.694766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.695130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.695161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 00:29:05.731 [2024-10-08 18:44:59.695524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.695552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.731 qpair failed and we were unable to recover it. 
00:29:05.731 [2024-10-08 18:44:59.695724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.731 [2024-10-08 18:44:59.695751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.696128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.696158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.696487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.696517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.696880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.696908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.697331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.697360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.697718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.697747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.698042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.698071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.698443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.698472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.698839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.698866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.699209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.699239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 
00:29:05.732 [2024-10-08 18:44:59.699588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.699617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.699992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.700021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.700380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.700409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.700778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.700806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.701170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.701199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.701565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.701593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.701997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.702027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.702385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.702413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.702776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.702803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.703163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.703192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 
00:29:05.732 [2024-10-08 18:44:59.703629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.703657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.703996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.704026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.704246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.704279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.704541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.704570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.704919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.704947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.705327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.705357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.705709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.705739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.706109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.706137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.706514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.706542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.706946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.706992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 
00:29:05.732 [2024-10-08 18:44:59.707346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.707375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.707736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.707764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.708174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.708204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.708560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.708588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.708952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.708990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.709224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.709252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.709483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.709516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.709894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.709923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.710292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.710322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.732 [2024-10-08 18:44:59.710679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.710706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 
00:29:05.732 [2024-10-08 18:44:59.710946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.732 [2024-10-08 18:44:59.710991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.732 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.711426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.711455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.711806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.711833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.712101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.712130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.712407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.712435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.712852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.712880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.713142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.713171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.713541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.713569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.713936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.713964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.714353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.714381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 
00:29:05.733 [2024-10-08 18:44:59.714635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.714664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.714966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.715016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.715377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.715405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.715700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.715729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.716083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.716113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.716450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.716479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.716844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.716873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.717266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.717296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.717671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.717700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.718059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.718088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 
00:29:05.733 [2024-10-08 18:44:59.718426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.718454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.718595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.718627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.718844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.718877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.719257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.719288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.719648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.719676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.720050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.720079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.720454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.720483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.720869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.720896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.721271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.721300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.721642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.721671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 
00:29:05.733 [2024-10-08 18:44:59.722046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.722075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.722447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.722475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.722705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.722735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.723125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.723155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.723534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.723564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.723941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.723969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.724397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.724432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.724786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.733 [2024-10-08 18:44:59.724815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.733 qpair failed and we were unable to recover it. 00:29:05.733 [2024-10-08 18:44:59.725193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.734 [2024-10-08 18:44:59.725222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.734 qpair failed and we were unable to recover it. 00:29:05.734 [2024-10-08 18:44:59.725459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.734 [2024-10-08 18:44:59.725490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:05.734 qpair failed and we were unable to recover it. 
00:29:05.734 [2024-10-08 18:44:59.725838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.734 [2024-10-08 18:44:59.725867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:05.734 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats for every reconnect attempt from 18:44:59.725838 through 18:44:59.803135, with only the timestamps changing: each connect() to 10.0.0.2 port 4420 fails with errno = 111, and tqpair=0x7f9280000b90 cannot be recovered]
00:29:06.015 [2024-10-08 18:44:59.803105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.015 [2024-10-08 18:44:59.803135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.015 qpair failed and we were unable to recover it.
00:29:06.015 [2024-10-08 18:44:59.803492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.803522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.803886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.803914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.804272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.804302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.804661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.804688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.805050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.805079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.805463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.805491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.805859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.805886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.806241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.806270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.806713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.806741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.807076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.807106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 
00:29:06.015 [2024-10-08 18:44:59.807468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.807495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.807875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.807903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.808251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.808280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.808639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.808673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.809037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.809067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.809439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.809466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.809843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.809871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.810217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.810247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.810609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.810638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.810995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.811025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 
00:29:06.015 [2024-10-08 18:44:59.811393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.811421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.811778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.811806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.812200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.812229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.812571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.812599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.812970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.813010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.813355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.813383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.813723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.813751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.814115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.814146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.814546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.814574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.814822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.814849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 
00:29:06.015 [2024-10-08 18:44:59.815220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.815249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.815606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.815635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.815943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.815970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.816348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.816377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.816793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.816821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.817154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.817183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.817425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.817453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.817793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.817828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.818202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.818230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.818597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.818626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 
00:29:06.015 [2024-10-08 18:44:59.818994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.819025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.819384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.819412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.819854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.819882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.820219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.820250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.820617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.015 [2024-10-08 18:44:59.820645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.015 qpair failed and we were unable to recover it. 00:29:06.015 [2024-10-08 18:44:59.820894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.820922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.821283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.821313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.821748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.821777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.822139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.822168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.822529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.822557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.822916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.822943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.823336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.823365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.823728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.823756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.824093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.824130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.824488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.824516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.824879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.824907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.825270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.825300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.825659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.825687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.826124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.826153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.826523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.826551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.826921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.826950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.827204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.827234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.827606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.827634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.828017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.828047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.828441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.828470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.828801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.828830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.829217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.829247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.829601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.829630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.830005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.830035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.830284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.830312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.830681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.830708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.831050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.831080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.831395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.831425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.831792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.831820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.832180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.832211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.832570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.832598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.832933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.832963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.833335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.833363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.833723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.833752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.834114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.834143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.834522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.834551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.834910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.834939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.835305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.835334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.835706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.835734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.836091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.836120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.836486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.836514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.836877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.836906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.837280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.837309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.837571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.837599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.837963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.838003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.838353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.838381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.838740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.838768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.839108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.839138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.839512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.839546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.839892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.839922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.840319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.840350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.840718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.840747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.841188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.841219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.841576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.841605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.841967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.842007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.842379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.842408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.842762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.842790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.843150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.843179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.843405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.843438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.843802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.843831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.844209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.844239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.844597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.844625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.845088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.845118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.845486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.845514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.845858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.845885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 
00:29:06.016 [2024-10-08 18:44:59.846124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.846153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.846512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.846541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.846912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.846939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.016 [2024-10-08 18:44:59.847395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.016 [2024-10-08 18:44:59.847424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.016 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.847775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.847804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.848174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.848205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.848563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.848592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.848962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.849001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.849348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.849377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.849748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.849776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 
00:29:06.017 [2024-10-08 18:44:59.850138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.850168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.850529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.850556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.850890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.850919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.851293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.851323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.851685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.851714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.852082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.852112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.852483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.852511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.852872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.852899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.853264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.853294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 00:29:06.017 [2024-10-08 18:44:59.853669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.017 [2024-10-08 18:44:59.853697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.017 qpair failed and we were unable to recover it. 
00:29:06.017 [2024-10-08 18:44:59.854058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.017 [2024-10-08 18:44:59.854087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.017 qpair failed and we were unable to recover it.
[... the same three-line failure — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats verbatim for every reconnect attempt from 18:44:59.854455 through 18:44:59.933188, differing only in timestamps ...]
00:29:06.020 [2024-10-08 18:44:59.933543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.020 [2024-10-08 18:44:59.933572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.020 qpair failed and we were unable to recover it.
00:29:06.020 [2024-10-08 18:44:59.933825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.933857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.934155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.934185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.934546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.934574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.934823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.934855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.935207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.935237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.935606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.935634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.935992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.936021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.936357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.936386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.936756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.936784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.937084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.937113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 
00:29:06.020 [2024-10-08 18:44:59.937361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.937389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.937741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.937770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.938013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.938046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.938411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.938439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.938791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.938820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.939210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.939240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.939602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.939630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.939993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.940022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.940366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.940395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.940760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.940788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 
00:29:06.020 [2024-10-08 18:44:59.941146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.941176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.941539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.941568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.941910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.941939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.942309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.942339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.942589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.942618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.942991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.943020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.943366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.943395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.943760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.943789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.944129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.944159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.944533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.944561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 
00:29:06.020 [2024-10-08 18:44:59.944919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.944948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.945313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.945343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.945708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.945738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.946112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.946149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.946404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.946436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.946786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.946814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.947183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.947212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.947467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.947496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.947878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.947907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.948167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.948197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 
00:29:06.020 [2024-10-08 18:44:59.948549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.948579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.948931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.948962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.949359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.949391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.949765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.949794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.950166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.950196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.950559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.950587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.950821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.950852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.951216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.951245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.951598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.951626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.951997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.952026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 
00:29:06.020 [2024-10-08 18:44:59.952460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.952489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.952822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.952850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.953234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.953263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.953624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.953652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.954020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.954050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.954404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.954432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.954809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.954837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.955205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.955235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.955586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.955614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.955984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.956014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 
00:29:06.020 [2024-10-08 18:44:59.956262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.956291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.956655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.956683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.957055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.957086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.957442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.957471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.957848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.957876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.958232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.958262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.958625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.958654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.959091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.020 [2024-10-08 18:44:59.959120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.020 qpair failed and we were unable to recover it. 00:29:06.020 [2024-10-08 18:44:59.959451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.959481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.959846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.959875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.960117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.960150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.960423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.960452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.960811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.960840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.961242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.961278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.961626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.961654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.961938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.961967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.962381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.962410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.962793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.962822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.963116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.963145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.963526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.963555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.963927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.963956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.964365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.964395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.964752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.964782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.965151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.965182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.965363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.965391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.965662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.965691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.965990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.966024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.966425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.966453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.966822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.966851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.967200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.967230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.967484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.967512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.967866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.967894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.968253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.968284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.968650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.968678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.969043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.969073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.969453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.969482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.969820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.969848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.970092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.970124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.970502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.970531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.970777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.970805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.971157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.971188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.971598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.971626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.971988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.972020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.972358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.972387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.972737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.972765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.973113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.973143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.973515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.973543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.973901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.973930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.974285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.974315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.974564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.974596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.974948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.974987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.975321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.975349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.975725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.975755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.976130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.976165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.976421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.976453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.976824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.976852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.977203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.977234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.977592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.977619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.977874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.977902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.978251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.978281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.978618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.978647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.979038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.979067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.979304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.979332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.979699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.979727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.980091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.980121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.980505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.980534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.980906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.980934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.981316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.981346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.981688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.981716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 00:29:06.021 [2024-10-08 18:44:59.982012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.982042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it. 
00:29:06.021 [2024-10-08 18:44:59.982303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.021 [2024-10-08 18:44:59.982331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.021 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats continuously for every connection attempt between 18:44:59.982 and 18:45:00.060: connect() failed, errno = 111 from posix_sock_create, sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:29:06.295 [2024-10-08 18:45:00.060262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.060292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it.
00:29:06.295 [2024-10-08 18:45:00.060626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.060672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.061024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.061054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.061375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.061403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.061788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.061816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.062217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.062248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.062618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.062646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.062885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.062913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.063245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.063274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.063673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.063701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.064073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.064104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-08 18:45:00.064534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.064563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.064955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-08 18:45:00.065021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-08 18:45:00.065288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.065316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.065670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.065699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.066108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.066137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.066514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.066544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.066915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.066944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.067097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.067128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.067475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.067504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.067956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.067997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 
00:29:06.296 [2024-10-08 18:45:00.068357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.068386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.068757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.068785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.069194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.069224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.069581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.069610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.069864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.069896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.070161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.070194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.070570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.070600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.070851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.070881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.071148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.071179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.071515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.071545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 
00:29:06.296 [2024-10-08 18:45:00.071916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.071945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.072353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.072384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.072743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.072772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.073028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.073058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.073406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.073435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.073802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.073832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.074129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.074158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.074510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.074539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.074943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.074972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.075345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.075374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 
00:29:06.296 [2024-10-08 18:45:00.075705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.075738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.076029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.076060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.076413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.076443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.076852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.076880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.077279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.077309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.077665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.077695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.078051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.078082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.078460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.078489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.078852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.078881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.079142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.079176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 
00:29:06.296 [2024-10-08 18:45:00.079447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.079476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.079898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-08 18:45:00.079928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-08 18:45:00.080283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.080323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.080678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.080707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.081062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.081093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.081452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.081482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.081855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.081883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.082279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.082309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.082657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.082687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.083054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.083083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 
00:29:06.297 [2024-10-08 18:45:00.083420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.083451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.083704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.083734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.084015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.084045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.084454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.084483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.084851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.084880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.085136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.085166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.085529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.085558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.085921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.085950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.086330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.086359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.086626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.086654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 
00:29:06.297 [2024-10-08 18:45:00.086916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.086945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.087364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.087394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.087720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.087748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.088016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.088045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.088433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.088462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.088837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.088866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.089248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.089278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.089565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.089595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.089958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.089998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.090269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.090298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 
00:29:06.297 [2024-10-08 18:45:00.090655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.090691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.091055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.091085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.091453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.091482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.091743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.091772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.092046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.092076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.092441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.092471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.092833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.092862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.093232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.093264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.093515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.093544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.093791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.093819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 
00:29:06.297 [2024-10-08 18:45:00.094258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.094287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.297 qpair failed and we were unable to recover it. 00:29:06.297 [2024-10-08 18:45:00.094555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.297 [2024-10-08 18:45:00.094587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.094993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.095023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.095386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.095416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.095669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.095699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.095936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.095965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.096336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.096365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.096739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.096767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.097129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.097160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.097416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.097445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 
00:29:06.298 [2024-10-08 18:45:00.097801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.097829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.098188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.098219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.098598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.098627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.098992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.099023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.099408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.099436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.099788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.099817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.100132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.100162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.100535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.100565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.100933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.100963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.101308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.101337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 
00:29:06.298 [2024-10-08 18:45:00.101704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.101734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.102096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.102125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.102455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.102485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.102862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.102891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.103309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.103339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.103684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.103719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.104090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.104120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.104501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.104530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.104896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.104924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.105282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.105313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 
00:29:06.298 [2024-10-08 18:45:00.105576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.105610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.106007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.106038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.106329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.106357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.106602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.106632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.106993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.107023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.107369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.107398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.107763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.107792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.108155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.108185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.108428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.108460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-08 18:45:00.108827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-08 18:45:00.108856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 
00:29:06.298 [2024-10-08 18:45:00.109226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.298 [2024-10-08 18:45:00.109255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.298 qpair failed and we were unable to recover it.
00:29:06.298 [2024-10-08 18:45:00.109588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.299 [2024-10-08 18:45:00.109618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.299 qpair failed and we were unable to recover it.
[... the same two-line error pair (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420) repeats for every subsequent reconnect attempt, timestamps advancing from 18:45:00.110 through 18:45:00.187; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:06.304 [2024-10-08 18:45:00.187901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.304 [2024-10-08 18:45:00.187930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.304 qpair failed and we were unable to recover it.
00:29:06.304 [2024-10-08 18:45:00.188175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.188205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.188576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.188606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.189047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.189076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.189425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.189454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.189842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.189871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.190206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.190236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.190642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.190670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.191066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.191101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.191457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.191487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.191864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.191892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-08 18:45:00.192250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.192280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.192635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.192663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.193033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.193062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.193395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.193423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.193754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.193782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.194120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.194148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.194492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.194521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.194886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.194914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.195365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.195395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.195639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.195666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-08 18:45:00.196050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.196081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-08 18:45:00.196466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-08 18:45:00.196494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.196727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.196755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.197119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.197148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.197392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.197424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.197771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.197801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.198164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.198194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.198547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.198576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.198916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.198945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.199295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.199324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
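The repeated errno = 111 above is ECONNREFUSED on Linux: each TCP connect() to 10.0.0.2:4420 is being rejected because no NVMe-oF target is listening there at this point in the test. A minimal bash sketch of the same check (illustrative only; the address and port are copied from the log, the loop itself is not part of the test suite):

#!/usr/bin/env bash
# Illustrative only -- not part of the test suite. errno 111 on Linux is
# ECONNREFUSED: the connection attempt is actively refused, typically
# because nothing is bound to the port. addr/port mirror the log above.
addr=10.0.0.2 port=4420
for i in 1 2 3 4 5; do
  # bash's /dev/tcp pseudo-device attempts a TCP connect; with no listener
  # on the port the redirect fails with "Connection refused".
  if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
    echo "attempt $i: connected"
    break
  fi
  echo "attempt $i: connect() failed (no listener on ${addr}:${port}?)"
  sleep 0.2
done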
00:29:06.305 [2024-10-08 18:45:00.199688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.199717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.200060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.200089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1412481 Killed "${NVMF_APP[@]}" "$@" 00:29:06.305 [2024-10-08 18:45:00.200318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.200350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.200533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.200567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-08 18:45:00.200905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.200935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:06.305 [2024-10-08 18:45:00.201334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.201364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:06.305 [2024-10-08 18:45:00.201807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-08 18:45:00.201836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.305 [... the connect() failed / sock connection error / qpair failed sequence repeats for 19 attempts against 10.0.0.2:4420, timestamped 18:45:00.202206 through 18:45:00.209138 ...]
00:29:06.305 [... identical connection-failure sequences at 18:45:00.209511, 18:45:00.209858, and 18:45:00.210144 ...]
00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1413313
00:29:06.305 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1413313
00:29:06.306 [... identical connection-failure sequence at 18:45:00.210547 ...]
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:06.306 [... identical connection-failure sequence at 18:45:00.210836 ...]
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1413313 ']'
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.306 [... identical connection-failure sequence at 18:45:00.211252 ...]
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.306 [... identical connection-failure sequence at 18:45:00.211636 ...]
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:06.306 18:45:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.306 [... the connect() failed / sock connection error / qpair failed sequence repeats for 8 further attempts, timestamped 18:45:00.212026 through 18:45:00.214688 ...]
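waitforlisten then polls until the replacement target (pid 1413313) is up and its RPC socket /var/tmp/spdk.sock exists, retrying up to max_retries=100 times per the trace. A rough bash approximation of that helper (the real one lives in SPDK's test/common/autotest_common.sh and also probes the RPC server; the loop body below is an assumption, only the pid, socket path, and retry count are taken from the trace):

# Rough approximation of SPDK's waitforlisten helper -- loop body is assumed.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
    [[ -S $rpc_addr ]] && return 0          # RPC socket exists: target is up
    sleep 0.5
  done
  return 1                                  # gave up after max_retries polls
}

waitforlisten_sketch 1413313 /var/tmp/spdk.sock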
00:29:06.306 [... the connect() failed / sock connection error / qpair failed sequence repeats for 90 further attempts against 10.0.0.2:4420, timestamped 18:45:00.214946 through 18:45:00.246507 ...]
00:29:06.308 [2024-10-08 18:45:00.246768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.246797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.247099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.247129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.247366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.247398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.247762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.247790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.248156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.248186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.248329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.248355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.248659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.248687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.248940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.248969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.249346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.249375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.249705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.249733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 
00:29:06.308 [2024-10-08 18:45:00.250048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-08 18:45:00.250078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-08 18:45:00.250279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.250308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.250534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.250568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.250990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.251021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.251385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.251414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.251775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.251803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.252055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.252088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.252467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.252496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.252848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.252877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.253367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.253397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 
00:29:06.309 [2024-10-08 18:45:00.253763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.253792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.254161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.254190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.254448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.254476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.254858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.254887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.255112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.255142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.255520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.255549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.255936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.255965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.256236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.256269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.256654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.256684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.257060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.257091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 
00:29:06.309 [2024-10-08 18:45:00.257470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.257499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.257892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.257923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.258239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.258269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.258527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.258555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.258943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.258972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.259217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.259245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.259470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.259500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.259854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.259882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.260153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.260182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.260536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.260565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 
00:29:06.309 [2024-10-08 18:45:00.260952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.260992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.261369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.261398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.261647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.261676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.261786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.261813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.262057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19af0f0 is same with the state(6) to be set
00:29:06.309 [2024-10-08 18:45:00.262451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.262503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.262864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.262896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.263442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.263552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.264010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.264049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-08 18:45:00.264220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-08 18:45:00.264250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
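Buried in the run above is a different message: nvme_tcp_qpair_set_recv_state reports that tqpair 0x19af0f0 was asked to move into receive state 6, the state it already holds, after which the retries continue against a new qpair object (0x19b1550). SPDK logs the redundant transition at error level, but it amounts to a no-op guard. A hedged sketch of that kind of idempotent state setter (illustrative shape only, not the SPDK source; the enum values here are placeholders beyond the 6 seen in the log):

    #include <stdio.h>

    /* Placeholder recv-state enum; value 6 mirrors the "state(6)" above. */
    enum recv_state { RECV_STATE_READY = 1, RECV_STATE_ERROR = 6 };

    struct tqpair { enum recv_state recv_state; };

    /* Idempotent setter: complain (and do nothing) when asked to enter
     * the state the qpair is already in -- the shape of the log line. */
    static void set_recv_state(struct tqpair *q, enum recv_state next)
    {
        if (q->recv_state == next) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)q, (int)next);
            return;
        }
        q->recv_state = next;
    }

    int main(void)
    {
        struct tqpair q = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the warning path */
        return 0;
    }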
00:29:06.309 [2024-10-08 18:45:00.264677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.264706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.265060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.309 [2024-10-08 18:45:00.265091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.309 qpair failed and we were unable to recover it. 00:29:06.309 [2024-10-08 18:45:00.265460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.265491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.265872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.265902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.266278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.266309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.266682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.266711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.267100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.267131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.267368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.267397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.267782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.267811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.268181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.268212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 
00:29:06.310 [2024-10-08 18:45:00.268610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.268638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.269031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.269061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.269308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.269336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.269573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.269602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.269968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.270035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.270427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.270456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.270776] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:29:06.310 [2024-10-08 18:45:00.270857] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:06.310 [2024-10-08 18:45:00.270866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.270897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.271197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.271230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-08 18:45:00.271608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-08 18:45:00.271638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
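Interleaved with the connect noise above, a second SPDK application begins initializing; its bracketed DPDK EAL parameters pin it to core mask -c 0xF0 (binary 11110000, i.e. logical cores 4-7) and namespace its hugepage files with --file-prefix=spdk0 so it cannot collide with another DPDK process on the same machine. A throwaway C sketch of how such a hex core mask decodes into core IDs:

    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;  /* the -c coremask from the EAL parameter line */

        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core))
                printf("core %d enabled\n", core);  /* 0xF0 -> cores 4, 5, 6, 7 */
        }
        return 0;
    }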
00:29:06.310 [2024-10-08 18:45:00.272021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.272052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.272420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.272451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.272822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.272852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.273311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.273344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.273600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.273630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.274024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.274055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.274299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.274331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.274703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.274733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.275104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.275135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.275505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.275535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 
00:29:06.310 [2024-10-08 18:45:00.275929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.275961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.276370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.276400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.276772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.276801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.277053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.277084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.277456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.277487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.277838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.277868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.278043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.278075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.278444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.278473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.278851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.278882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.279250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.279282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 
00:29:06.310 [2024-10-08 18:45:00.279540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.279574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.310 [2024-10-08 18:45:00.279959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.310 [2024-10-08 18:45:00.280000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.310 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.280515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.280547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.280914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.280954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.281367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.281398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.281774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.281805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.282112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.282144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.282507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.282537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.282798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.282829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.283181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.283212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 
00:29:06.311 [2024-10-08 18:45:00.283676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.283707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.284143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.284174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.284560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.284591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.284961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.285002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.285401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.285433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.285809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.285839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.286080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.286112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.286378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.286409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.286778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.286808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.287193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.287226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 
00:29:06.311 [2024-10-08 18:45:00.287585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.287615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.287853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.287884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.288252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.288284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.288627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.288659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.289023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.289055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.289278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.289307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.289716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.289746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.290005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.290037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.290412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.290441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.290816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.290846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 
00:29:06.311 [2024-10-08 18:45:00.291224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.291261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.291628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.291658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.292030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.292061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.292222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.292250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.292638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.292668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.293011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.293041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.293401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.293431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.293790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.293821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.294222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.294253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.294647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.294675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 
00:29:06.311 [2024-10-08 18:45:00.295028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.295058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.295432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.311 [2024-10-08 18:45:00.295463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.311 qpair failed and we were unable to recover it. 00:29:06.311 [2024-10-08 18:45:00.295847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.295877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.296222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.296251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.296693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.296724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.297151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.297182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.297557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.297586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.297995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.298026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.298413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.298443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 00:29:06.312 [2024-10-08 18:45:00.298842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.312 [2024-10-08 18:45:00.298872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.312 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-08 18:45:00.366326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:06.589 [2024-10-08 18:45:00.366413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.589 [2024-10-08 18:45:00.366442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.589 qpair failed and we were unable to recover it.
00:29:06.589 [2024-10-08 18:45:00.370490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.589 [2024-10-08 18:45:00.370518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.589 qpair failed and we were unable to recover it.
00:29:06.593 [2024-10-08 18:45:00.438404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.593 [2024-10-08 18:45:00.438432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.593 qpair failed and we were unable to recover it.
00:29:06.593 [2024-10-08 18:45:00.438795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-08 18:45:00.438823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.439203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.439233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.439618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.439646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.439826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.439861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.440286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.440317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.440572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.440604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.440859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.440888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.441250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.441280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.441669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.441698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.442072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.442103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-08 18:45:00.442465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.442495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.442822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.442851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.443231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.443261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.443535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.443563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.443939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.443969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.444260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.444291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.444664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.444695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.444937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.444968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.445417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.445448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.445884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.445913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-08 18:45:00.446156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.446190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.446538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.446568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.446820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.446849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.447207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.447237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.447604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.447632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.448014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.448044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.448478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.448506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.448833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.448863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.449211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.449240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.449490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.449519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-08 18:45:00.449893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.449921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.450292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.450322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.450688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.450719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.451061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.451091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.451366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.451395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.451746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.451774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.452043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.452072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.452312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.452341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.452598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.452627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.452839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.452867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-08 18:45:00.453229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-08 18:45:00.453258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-08 18:45:00.453508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.453536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.453901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.453930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.454280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.454310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.454761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.454796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.455162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.455192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.455598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.455627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.455982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.456013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.456405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.456433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-08 18:45:00.456791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-08 18:45:00.456819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 
00:29:06.595 [2024-10-08 18:45:00.457210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.595 [2024-10-08 18:45:00.457239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.595 qpair failed and we were unable to recover it.
[... identical connect()/qpair failures continue through 18:45:00.460; the application's startup notices, which were interleaved mid-line with the error records in the raw capture, are untangled below ...]
00:29:06.595 [2024-10-08 18:45:00.460639] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:06.595 [2024-10-08 18:45:00.460691] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:06.595 [2024-10-08 18:45:00.460702] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:06.595 [2024-10-08 18:45:00.460713] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:06.595 [2024-10-08 18:45:00.460719] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:06.595 [2024-10-08 18:45:00.462847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:29:06.595 [2024-10-08 18:45:00.463082] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:29:06.595 [2024-10-08 18:45:00.463475] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:29:06.595 [2024-10-08 18:45:00.463479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
[... interleaved with these notices, the same connect()/qpair failure sequence keeps repeating from 18:45:00.460 through 18:45:00.466 ...]
[... the failure loop continues unchanged from 18:45:00.466 through 18:45:00.497, ending with ...]
00:29:06.598 [2024-10-08 18:45:00.497258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.598 [2024-10-08 18:45:00.497291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.598 qpair failed and we were unable to recover it.
00:29:06.598 [2024-10-08 18:45:00.497648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.497679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.498055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.498085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.498202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.498229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.498704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.498846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.499425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.499528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.499815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.499854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.500309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.500412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.500780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.500816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.501052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.501085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.501448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.501477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 
00:29:06.598 [2024-10-08 18:45:00.501732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.501775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.502066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.502099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.502436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.502465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.502634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.502672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.503051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.503083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.503289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.503318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.503706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.503735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.504094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.504125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.504347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.504380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.504747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.504778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 
00:29:06.598 [2024-10-08 18:45:00.505004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.505034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.505368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.505398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.505730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.505759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.506131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.506161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.506426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.506460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.506824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.506853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.507111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.507141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.507519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.507548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.507785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.507813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-08 18:45:00.508204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-08 18:45:00.508234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-08 18:45:00.508553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.508581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.508940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.508969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.509127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.509155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.509393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.509421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.509773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.509801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.510211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.510242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.510613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.510642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.511032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.511063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.511414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.511442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.511724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.511752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-08 18:45:00.512116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.512146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.512350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.512378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.512652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.512680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.513031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.513060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.513433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.513462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.513819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.513847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.514249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.514278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.514497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.514525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.514735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.514763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.515149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.515179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-08 18:45:00.515313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.515346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.515724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.515755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.515850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.515878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.516158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.516188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.516396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.516424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.516793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.516822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.517154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.517184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.517552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.517581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.517823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.517851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.518227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.518258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-08 18:45:00.518628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.518657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.518938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.518966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.519210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.519240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.519498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.519526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.519773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.519807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.520003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.520033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.520245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.520272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.520626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.520654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.520775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.520802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-08 18:45:00.521167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-08 18:45:00.521196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-08 18:45:00.521568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.521598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.521852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.521881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.522140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.522169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.522590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.522618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.522995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.523024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.523277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.523305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.523509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.523537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.523882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.523911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.524268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.524299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.524431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.524458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-08 18:45:00.524815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.524844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.525208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.525237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.525478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.525506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.525728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.525756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.526151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.526182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.526535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.526564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.526799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.526826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.527061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.527091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.527372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.527400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.527767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.527795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-08 18:45:00.528173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.528209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.528391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.528422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.528808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.528836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.529054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.529083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.529358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.529387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.529636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.529664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.529904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.529932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.530321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.530352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.530669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.530697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.531069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.531098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-08 18:45:00.531459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.531487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.531841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.531872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.532238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.532267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.532637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.532666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.532912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.532941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-08 18:45:00.533050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-08 18:45:00.533081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.533379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.533408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.533772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.533802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.534151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.534180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.534500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.534529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 
00:29:06.601 [2024-10-08 18:45:00.534899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.534927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.535317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.535347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.535702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.535731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.536130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.536160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.536516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.536544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.536882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.536910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.537329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.537359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.537708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.537738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.538001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.538035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.538438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.538466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 
00:29:06.601 [2024-10-08 18:45:00.538829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.538858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.539219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.539248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.539608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.539637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.540010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.540040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.540394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.540422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.540784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.540811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.541077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.541107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.541543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.541572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.541905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.541934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 00:29:06.601 [2024-10-08 18:45:00.542290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.542319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it. 
00:29:06.601 [2024-10-08 18:45:00.542687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.601 [2024-10-08 18:45:00.542718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:06.601 qpair failed and we were unable to recover it.
00:29:06.601 [... identical connect() failure (errno = 111) and unrecoverable qpair error repeated for tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 through 18:45:00.554445 ...]
00:29:06.602 [2024-10-08 18:45:00.554792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.602 [2024-10-08 18:45:00.554886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.602 qpair failed and we were unable to recover it.
00:29:06.606 [... same failure pattern repeated for tqpair=0x19b1550 with addr=10.0.0.2, port=4420 through 18:45:00.618795 ...]
00:29:06.606 [2024-10-08 18:45:00.618795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.606 [2024-10-08 18:45:00.618823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.606 qpair failed and we were unable to recover it.
00:29:06.606 [2024-10-08 18:45:00.619057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.606 [2024-10-08 18:45:00.619086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.606 qpair failed and we were unable to recover it. 00:29:06.606 [2024-10-08 18:45:00.619324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.606 [2024-10-08 18:45:00.619361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.606 qpair failed and we were unable to recover it. 00:29:06.606 [2024-10-08 18:45:00.619724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.606 [2024-10-08 18:45:00.619753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.606 qpair failed and we were unable to recover it. 00:29:06.606 [2024-10-08 18:45:00.620134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.606 [2024-10-08 18:45:00.620164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.606 qpair failed and we were unable to recover it. 00:29:06.606 [2024-10-08 18:45:00.620454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.620482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.620860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.620888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.621225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.621255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.621601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.621630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.621993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.622023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.622382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.622413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 
00:29:06.607 [2024-10-08 18:45:00.622777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.622806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.623027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.623056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.623475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.623503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.623846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.623875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.624221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.624252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.624463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.624491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.624729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.624757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.625113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.625143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.625513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.625542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.625918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.625946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 
00:29:06.607 [2024-10-08 18:45:00.626182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.626211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.626456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.626497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.626855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.626885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.627341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.627371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.627720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.627749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.627919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.627949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.628313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.628345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.628456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.628485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.628869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.628899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.629145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.629176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 
00:29:06.607 [2024-10-08 18:45:00.629556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.629586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.629962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.630000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.630374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.630403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.630640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.630668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.631034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.631065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.631442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.631471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.631837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.631866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.632237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.632267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.632477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.632506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 00:29:06.607 [2024-10-08 18:45:00.632892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.607 [2024-10-08 18:45:00.632921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.607 qpair failed and we were unable to recover it. 
00:29:06.880 [2024-10-08 18:45:00.633220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.633254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.633595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.633624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.633999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.634030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.634398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.634427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.634676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.634705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.635066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.635096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.635469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.635498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.635874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.635903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.636282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.636313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.636525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.636554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 
00:29:06.880 [2024-10-08 18:45:00.636949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.636991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.880 [2024-10-08 18:45:00.637340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.880 [2024-10-08 18:45:00.637369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.880 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.637745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.637776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.637995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.638026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.638371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.638399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.638768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.638798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.639140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.639170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.639263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.639290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.639494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.639522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.639887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.639915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 
00:29:06.881 [2024-10-08 18:45:00.640173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.640203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.640555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.640590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.640823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.640852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.641216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.641247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.641611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.641639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.641892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.641925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.642357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.642388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.642604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.642632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.643003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.643033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.643279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.643308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 
00:29:06.881 [2024-10-08 18:45:00.643677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.643706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.643916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.643944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.644061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.644090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.644305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.644335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.644782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.644811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.645147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.645179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.645557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.645585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.645958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.645999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.646411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.646441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.646659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.646687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 
00:29:06.881 [2024-10-08 18:45:00.647017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.647048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.647405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.647434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.647849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.647878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.648229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.648259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.648493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.648522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.648893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.648921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.649169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.649199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.649544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.649573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.649698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.649732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.881 [2024-10-08 18:45:00.650091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.650121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 
00:29:06.881 [2024-10-08 18:45:00.650216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.881 [2024-10-08 18:45:00.650242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.881 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.650564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.650593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.650959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.650999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.651350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.651378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.651747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.651776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.652144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.652177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.652448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.652476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.652690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.652718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.653090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.653120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.653482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.653511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 
00:29:06.882 [2024-10-08 18:45:00.653951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.653986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.654362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.654392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.654786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.654816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.655169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.655199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.655504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.655533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.655900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.655929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.656272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.656303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.656669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.656697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.657056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.657087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.657468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.657497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 
00:29:06.882 [2024-10-08 18:45:00.657864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.657891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.658140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.658170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.658415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.658444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.658671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.658699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.658902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.658930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.659391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.659427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.659770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.659800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.660132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.660163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.660383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.660412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.660767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.660796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 
00:29:06.882 [2024-10-08 18:45:00.661173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.661204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.661582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.661611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.661948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.661984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.662236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.662266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.662496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.662525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.662919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.662947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.663356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.663385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.663749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.663778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.664021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.664051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 00:29:06.882 [2024-10-08 18:45:00.664412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.882 [2024-10-08 18:45:00.664442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.882 qpair failed and we were unable to recover it. 
00:29:06.882 [2024-10-08 18:45:00.664655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.883 [2024-10-08 18:45:00.664683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.883 qpair failed and we were unable to recover it.
[... the same three-message failure repeats back-to-back with only the microsecond timestamps advancing, from 18:45:00.664 through 18:45:00.737 (console time 00:29:06.882-00:29:06.888): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x19b1550 at addr=10.0.0.2, port=4420, and each qpair fails without recovery ...]
00:29:06.888 [2024-10-08 18:45:00.738230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.738262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.738636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.738665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.739045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.739076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.739308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.739337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.739571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.739600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.739960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.739996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.740376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.740407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.740777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.740806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.741029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.741058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.741430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.741458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 
00:29:06.888 [2024-10-08 18:45:00.741834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.741863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.742232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.742263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.742622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.742651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.743002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.743033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.743434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.743463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.743862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.743891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.744177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.744207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.744425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.744455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.744675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.744705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.744934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.744961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 
00:29:06.888 [2024-10-08 18:45:00.745219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.745250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.745471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.745499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.745876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.745904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.746139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.746170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.746334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.888 [2024-10-08 18:45:00.746363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.888 qpair failed and we were unable to recover it. 00:29:06.888 [2024-10-08 18:45:00.746763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.746792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.747009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.747039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.747291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.747324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.747697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.747726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.748106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.748136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 
00:29:06.889 [2024-10-08 18:45:00.748514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.748543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.748902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.748931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.749153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.749183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.749584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.749614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.749737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.749764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.750108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.750138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.750230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.750257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.750631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.750659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.750928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.750957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.751111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.751139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 
00:29:06.889 [2024-10-08 18:45:00.751396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.751425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.751660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.751691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.752096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.752127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.752510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.752539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.752782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.752809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.753152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.753182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.753400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.753429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.753664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.753692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.753954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.753994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.754416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.754446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 
00:29:06.889 [2024-10-08 18:45:00.754669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.754698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.754917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.754946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.755214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.755244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.755583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.755612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.755997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.756027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.756395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.756424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.756701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.756728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.756952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.756989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.757448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.757477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.757776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.757803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 
00:29:06.889 [2024-10-08 18:45:00.758042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.758077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.889 [2024-10-08 18:45:00.758324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.889 [2024-10-08 18:45:00.758353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.889 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.758708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.758737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.758969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.759006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.759216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.759244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.759461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.759489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.759706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.759734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.759990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.760020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.760271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.760302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.760647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.760675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-08 18:45:00.761046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.761078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.761275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.761303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.761647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.761675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.762092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.762122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.762516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.762545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.762780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.762807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.763084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.763115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.763390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.763418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.763849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.763878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.764091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.764121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-08 18:45:00.764452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.764479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.764707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.764736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.765163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.765192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.765425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.765453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.765684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.765714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.766078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.766108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.766260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.766290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.766690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.766725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.767015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.767045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.767183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.767213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-08 18:45:00.767502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.767532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.767880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.767909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.768138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.768168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.768532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.768560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.768788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.768816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.769205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.769234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.769332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.769359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.769737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.769767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.770006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.770036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.770393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.770423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-08 18:45:00.770789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.770817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.771177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-08 18:45:00.771207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-08 18:45:00.771468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.771500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.771866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.771895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.772248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.772277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.772649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.772679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.772908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.772936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.773309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.773338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.773718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.773746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.773996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.774027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-08 18:45:00.774163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.774189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.774558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.774587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.774954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.775028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.775250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.775278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.775501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.775529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.775766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.775794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.776216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.776246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.776616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.776645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.776913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.776941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.777230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.777259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-08 18:45:00.777644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.777672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.778030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.778061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.778269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.778298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.778632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.778660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.778898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.778928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.779297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.779328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.779690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.779718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.779947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.779985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.780426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.780455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.780820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.780848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-08 18:45:00.781121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.781150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.781404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.781433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.781812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.781842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.782121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.782150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.782505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.782535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.782778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.782806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.783149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.783180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.783497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.783525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.783888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.783917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.784151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.784181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-08 18:45:00.784408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.784437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.784835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-08 18:45:00.784863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-08 18:45:00.785256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.785286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.785440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.785467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.785891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.785921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.786366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.786396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.786748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.786777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.787030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.787063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.787374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.787402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 00:29:06.892 [2024-10-08 18:45:00.787618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.892 [2024-10-08 18:45:00.787645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:06.892 qpair failed and we were unable to recover it. 
00:29:06.892 [2024-10-08 18:45:00.788011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.788041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.788415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.788444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.788678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.788707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.788951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.788991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.789344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.789372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.789569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.789605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.789807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.789837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.789936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.789965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.790232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.790261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.790519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.790548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.790909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.790939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.791360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.791391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.791732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.791762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.792128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.792158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.792522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.792550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.792923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.792951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.793234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.793266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.793690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.794059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.794088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.794470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.794498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.794711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.794740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.794940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.794970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.795378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.795407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.795613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.795640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.795871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.795900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.796274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.796304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.796731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.796760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.797111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.797140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.797480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.797509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.797844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.797874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.798224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.798253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.798639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.892 [2024-10-08 18:45:00.798668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.892 qpair failed and we were unable to recover it.
00:29:06.892 [2024-10-08 18:45:00.799033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.799068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.799307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.799336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.799703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.799732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.799967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.800008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.800235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.800264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.800628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.800656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.800872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.800901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.801333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.801362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.801609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.801638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.801960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.801997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.802374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.802402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.802619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.802647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.803008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.803039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.803374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.803402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.803791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.803822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.804110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.804139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.804382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.804411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.804655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.804683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.804886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.804915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.805279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.805309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.805583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.805613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.806020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.806050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.806206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.806234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.806469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.806499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.806835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.806865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.807266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.807297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.807556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.807588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.807808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.807847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.808084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.808115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.808482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.808514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.808913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.808942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.809374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.809404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.809625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.809654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.809908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.809938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.810065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.810094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.810301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.810330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.810561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.893 [2024-10-08 18:45:00.810590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.893 qpair failed and we were unable to recover it.
00:29:06.893 [2024-10-08 18:45:00.810931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.810961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.811311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.811340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.811710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.811738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.812099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.812130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.812554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.812585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.812933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.812965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.813266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.813296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.813691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.813719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.814136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.814166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.814533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.814561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.814927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.814957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.815380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.815410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.815755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.815783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.815877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.815904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.816049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.816079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.816463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.816491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.816727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.816755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.817146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.817176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.817519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.817551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.817919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.817948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.818307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.818336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.818492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.818520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.818621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.818647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.818849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.818877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.819127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.819156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.819552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.819581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.819709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.819739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.820104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.820133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.820483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.820512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.820888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.820918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.821056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.821084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.821459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.821489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.821853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.821884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.822247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.822276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.822627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.822657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.822989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.823020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.823257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.823286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.823653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.823681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.824033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.824063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.824461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.894 [2024-10-08 18:45:00.824489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.894 qpair failed and we were unable to recover it.
00:29:06.894 [2024-10-08 18:45:00.824727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.824756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.825120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.825151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.825511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.825540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.825901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.825929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.826152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.826185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.826549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.826579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.826922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.826950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.827304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.827335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.827579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.827607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.827880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.827911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.828030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.828064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.828600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.828706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.829050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.829092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.829500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.829532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.829770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.829800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.830018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.830052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.830474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.830504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.830714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.830745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.830998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.831031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.831242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.831272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.831496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.831526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.831889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.831918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.832266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.832297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.832657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.832686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.832908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.832938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.833385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.833416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.833783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.833812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.834203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.834234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.834616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.834644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.835032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.835062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.835430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.835460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.835689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.835718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.835996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.836027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.836282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.836311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.836690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.836720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.836972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.837012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.837468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.837498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.837867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.837897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.838266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.838297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.838550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.895 [2024-10-08 18:45:00.838582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.895 qpair failed and we were unable to recover it.
00:29:06.895 [2024-10-08 18:45:00.838969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.839010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.839385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.839414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.839785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.839816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.840151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.840181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.840557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.840586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.840809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.840838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.841124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.841154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.841429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.841457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.841801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.841831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.841925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.841953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.842316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.842345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.842580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.842610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.842985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.843016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.843383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.843413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.843785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.843815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.844182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.844213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.844587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.844617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.844849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.844881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.845267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.845314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.845680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.845710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.845963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-08 18:45:00.846011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
00:29:06.896 [2024-10-08 18:45:00.846398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.846429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.846630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.846661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.847018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.847049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.847147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.847175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.847535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.847565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.847939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.847969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.848224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.848254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.848483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.848512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.848884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.848915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.849280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.849311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-08 18:45:00.849684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.849715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.849943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.849972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.850358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.850387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.850821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.850852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.851227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.851257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.851627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.851657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.852071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.852101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.852469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.852498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.852853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-08 18:45:00.852883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-08 18:45:00.853127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.853157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 
00:29:06.897 [2024-10-08 18:45:00.853393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.853423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.853661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.853691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.854118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.854149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.854526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.854555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.854930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.854960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.855247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.855278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.855526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.855556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.855684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.855713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.856072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.856102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.856473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.856503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 
00:29:06.897 [2024-10-08 18:45:00.856719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.856748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.856858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.856885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.857111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.857140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.857362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.857392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.857723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.857755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.858014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.858045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.858299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.858328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.858704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.858740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.858982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.859012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.859404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.859432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 
00:29:06.897 [2024-10-08 18:45:00.859755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.859786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.860136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.860165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.860534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.860564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.860781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.860809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.861217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.861246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.861619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.861647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.862019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.862048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.862370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.862399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.862645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.862673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.863047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.863076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 
00:29:06.897 [2024-10-08 18:45:00.863463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.863491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.863767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.863796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.864049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.864083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.864460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.864488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.864872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.864902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.865181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.865210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.865586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.865615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.866012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.866042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.866395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.897 [2024-10-08 18:45:00.866423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.897 qpair failed and we were unable to recover it. 00:29:06.897 [2024-10-08 18:45:00.866793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.866822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 
00:29:06.898 [2024-10-08 18:45:00.867191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.867221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.867341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.867368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.867626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.867653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.867881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.867909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.868181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.868215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.868567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.868595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.868812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.868841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.869094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.869122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.869398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.869426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.869641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.869670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 
00:29:06.898 [2024-10-08 18:45:00.870054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.870082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.870334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.870362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.870733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.870762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.871185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.871213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.871572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.871603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.871833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.871861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.872083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.872112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.872331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.872366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.872593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.872621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.872882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.872910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 
00:29:06.898 [2024-10-08 18:45:00.873176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.873208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.873546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.873574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.873947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.873996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.874360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.874388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.874649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.874677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.875019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.875049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.875298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.875326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.875563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.875601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.875969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.876006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.876368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.876396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 
00:29:06.898 [2024-10-08 18:45:00.876762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.876790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.877042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.877072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.877428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.877457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.877854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.898 [2024-10-08 18:45:00.877884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.898 qpair failed and we were unable to recover it. 00:29:06.898 [2024-10-08 18:45:00.878227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.878258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.878634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.878662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.879033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.879062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.879431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.879459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.879691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.879719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.880075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.880105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 
00:29:06.899 [2024-10-08 18:45:00.880322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.880351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.880451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.880477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.880868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.880898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.881033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.881061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.881290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.881319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.881652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.881680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.882050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.882080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.882462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.882491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.882755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.882784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.883037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.883067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 
00:29:06.899 [2024-10-08 18:45:00.883427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.883455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.883830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.883858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.884231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.884260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.884475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.884503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.884861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.884890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.885132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.885161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.885576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.885604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.885835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.885872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.886231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.886262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.886494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.886523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 
00:29:06.899 [2024-10-08 18:45:00.886881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.886909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.887276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.887305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.887672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.887700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.887932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.887960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.888392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.888422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.888640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.888671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.889048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.889078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.889449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.889479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.889864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.889893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.890245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.890274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 
00:29:06.899 [2024-10-08 18:45:00.890487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.890515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.890915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.890945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.891361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-08 18:45:00.891392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-08 18:45:00.891748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.891779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.892214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.892243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.892539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.892567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.892775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.892804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.893055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.893084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.893462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.893490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.893806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.893834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-08 18:45:00.894045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.894074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.894414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.894444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.894823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.894851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.895082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.895111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.895494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.895523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.895738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.895766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.896024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.896058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.896343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.896371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.896724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.896755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-08 18:45:00.897134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-08 18:45:00.897163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-08 18:45:00.897537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.900 [2024-10-08 18:45:00.897567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9280000b90 with addr=10.0.0.2, port=4420
00:29:06.900 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats verbatim for every reconnect attempt on tqpair=0x7f9280000b90 (addr=10.0.0.2, port=4420), timestamps 18:45:00.897937 through 18:45:00.946676; only the timestamps differ, duplicate entries elided ...]
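Note: errno = 111 in the entries above is ECONNREFUSED. The host's connect() to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is being actively refused, meaning no listener was accepting on that address and port at the time of each attempt. A minimal standalone sketch, plain POSIX sockets rather than SPDK's posix.c, that reproduces the same errno when nothing is bound to the target port:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

(The target either had not started its TCP listener yet or had already torn it down; the log alone does not say which.)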
00:29:07.178 [2024-10-08 18:45:00.947087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.178 [2024-10-08 18:45:00.947193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.178 qpair failed and we were unable to recover it.
[... the same three-line sequence then repeats verbatim for the newly allocated tqpair=0x19b1550 (same addr=10.0.0.2, port=4420), timestamps 18:45:00.947630 through 18:45:00.969339; duplicate entries elided ...]
00:29:07.180 [2024-10-08 18:45:00.969605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.969634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.969996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.970026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.970319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.970348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.970721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.970750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.971174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.971205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.971549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.971578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.971859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.971888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.972266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.972296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.972713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.972742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.972972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.973009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 
00:29:07.180 [2024-10-08 18:45:00.973394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.973423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.973564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.973591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.973862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.973898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.974270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.974299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.974682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.974711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.975078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.975108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.975538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.975566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.975689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.975715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.975960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.976000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.976244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.976274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 
00:29:07.180 [2024-10-08 18:45:00.976651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.976680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.976896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.976924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.977147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.977178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.977430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.977458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.977719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.977751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.978025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.978055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.978427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.978457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.978830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.978859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.979264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.979294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.979666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.979695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 
00:29:07.180 [2024-10-08 18:45:00.980062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.980091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.980387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.980415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.980654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.980682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.980985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.981016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.981381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.981410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.981638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.180 [2024-10-08 18:45:00.981666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.180 qpair failed and we were unable to recover it. 00:29:07.180 [2024-10-08 18:45:00.982100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.982130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.982252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.982279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.982492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.982520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.982800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.982831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 
00:29:07.181 [2024-10-08 18:45:00.983047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.983078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.983455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.983485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.983859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.983888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.984154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.984188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.984581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.984610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.984996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.985027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.985393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.985423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.985818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.985847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.986079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.986107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.986386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.986415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 
00:29:07.181 [2024-10-08 18:45:00.986803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.986831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.987045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.987075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.987458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.987487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.987876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.987907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.988293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.988324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.988715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.988742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.988985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.989015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.989230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.989259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.989605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.989633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.989864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.989894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 
00:29:07.181 [2024-10-08 18:45:00.990289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.990319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.990544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.990574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.990714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.990741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.990836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.990865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.991236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.991266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.991633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.991661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.991874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.991908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.992150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.181 [2024-10-08 18:45:00.992180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.181 qpair failed and we were unable to recover it. 00:29:07.181 [2024-10-08 18:45:00.992553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.992583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.992963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.993029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 
00:29:07.182 [2024-10-08 18:45:00.993438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.993467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.993683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.993711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.994087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.994117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.994504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.994533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.995039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.995070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.995431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.995462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.995830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.995860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.996117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.996148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.996526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.996555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.996925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.996954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 
00:29:07.182 [2024-10-08 18:45:00.997414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.997444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.997660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.997688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.997942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.997970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.998380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.998409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.998813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.998843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.999128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.999158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.999427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.999457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:00.999815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:00.999844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.000207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.000238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.000591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.000619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 
00:29:07.182 [2024-10-08 18:45:01.000856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.000885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.001258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.001291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.001643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.001674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.002050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.002085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.002527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.002556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.003010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.003042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.003381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.003409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.003673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.003703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.003962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.004016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.004148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.004179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 
00:29:07.182 [2024-10-08 18:45:01.004536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.004567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.004808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.004837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.005200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.005231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.005516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.005546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.005920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.005950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.006216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.006246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.006496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.006527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.006921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-10-08 18:45:01.006950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-10-08 18:45:01.007369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.007399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.007653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.007682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-10-08 18:45:01.007857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.007888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.008144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.008175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.008534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.008563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.008848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.008877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.009129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.009159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.009546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.009575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.009959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.009995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.010375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.010404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.010772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.010801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.011234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.011265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-10-08 18:45:01.011488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.011517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.011929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.011960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.012222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.012254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.012506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.012536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.012894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.012925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.013381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.013412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.013867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.013896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.014155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.014186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.014485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.014515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-10-08 18:45:01.014743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.014775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-10-08 18:45:01.015065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-10-08 18:45:01.015097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it.
[... the same three-line error repeats 208 more times between 18:45:01.015334 and 18:45:01.088048, identical apart from timestamps: always errno = 111, tqpair=0x19b1550, addr=10.0.0.2, port=4420 ...]
00:29:07.188 [2024-10-08 18:45:01.088286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-10-08 18:45:01.088315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it.
00:29:07.188 [2024-10-08 18:45:01.088550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-10-08 18:45:01.088579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-10-08 18:45:01.088980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-10-08 18:45:01.089010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-10-08 18:45:01.089436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-10-08 18:45:01.089466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.089686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.089714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.090119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.090151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.090378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.090406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.090659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.090688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.091136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.091166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.091328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.091356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.091790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.091819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-10-08 18:45:01.091932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.091962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.092286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.092318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.092662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.092691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.092922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.092950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.093079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.093109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.093606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.093709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.094049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.094115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.094504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.094536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.094931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.094960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.095350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.095380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
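The repeating triplet above is the initiator's connect-retry signature while the target side is down: posix_sock_create reports connect() failing with errno = 111 (ECONNREFUSED on Linux, i.e. nothing is listening at 10.0.0.2:4420), nvme_tcp_qpair_connect_sock surfaces that as a socket-level connection error for the qpair, and the transport abandons the attempt ("qpair failed and we were unable to recover it"). Note the tqpair value switching from 0x19b1550 to 0x7f9274000b90 in the records above: the retries continue against a different qpair object. A minimal standalone sketch of the underlying failure (hypothetical, not part of this test; 127.0.0.1 stands in for 10.0.0.2, and 4420 is the default NVMe/TCP port):

    /* connect_refused.c -- hypothetical demo, not SPDK source: connect()
     * to a port with no listener fails with errno 111 (ECONNREFUSED) on
     * Linux, which is what posix_sock_create is reporting above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),        /* default NVMe/TCP port */
        };

        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener bound to 4420 this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Built with cc connect_refused.c, this prints "connect() failed, errno = 111 (Connection refused)" whenever nothing is bound to the port, matching the errno in the records above.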
00:29:07.189 [2024-10-08 18:45:01.095652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.095681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.095961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.096000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.096354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.096385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.096517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.096545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.096989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.097020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.097389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.097419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.097640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.097668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.098009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.098040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.098393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.098423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.098787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.098816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-10-08 18:45:01.099077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.099106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.099482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.099510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.099757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.099785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.100147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.100177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.100528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.100565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.100933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.100963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.101362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.101393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.101643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.101677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.101960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.102004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.102380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.102411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-10-08 18:45:01.102593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.102622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.103087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-10-08 18:45:01.103118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-10-08 18:45:01.103398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.103426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.103707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.103737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.190 [2024-10-08 18:45:01.104098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.104130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:07.190 [2024-10-08 18:45:01.104364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.104393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:07.190 [2024-10-08 18:45:01.104683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.104711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.190 [2024-10-08 18:45:01.104950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.104993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 
00:29:07.190 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.190 [2024-10-08 18:45:01.105374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.105403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.105651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.105681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.106054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.106085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.106447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.106479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.106912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.106941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.107213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.107243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.107485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.107515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.107894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.107923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.108281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.108320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 
00:29:07.190 [2024-10-08 18:45:01.108565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.108595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.108945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.108984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.109280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.109310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.109523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.109554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.109912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.109943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9274000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.110480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.110589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.110969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.111025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.111291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.111321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.111592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.111622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.111952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.111994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 
00:29:07.190 [2024-10-08 18:45:01.112218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.112248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.112605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.112636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.113087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.113119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.113384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.113414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.113776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.113810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.114255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.114286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.114630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.114661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.114994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-10-08 18:45:01.115024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-10-08 18:45:01.115369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.115398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.115621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.115652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 
00:29:07.191 [2024-10-08 18:45:01.116012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.116042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.116395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.116425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.116679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.116710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.116802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.116829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.117200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.117230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.117651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.117680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.118037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.118067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.118414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.118444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.118791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.118821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.119096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.119128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 
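Conversely, the refusals stop the moment anything accepts on the port. A bare-bones listener sketch (hypothetical, not an NVMe-oF target: with it running, connect() would succeed and the ECONNREFUSED spam would cease, though the NVMe/TCP ICReq/ICResp handshake would still fail against it):

    /* listen_4420.c -- hypothetical helper, not SPDK source: accept raw
     * TCP on the NVMe/TCP port so that connect() attempts like the ones
     * above no longer fail with ECONNREFUSED. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_addr.s_addr = htonl(INADDR_ANY),
            .sin_port = htons(4420),        /* default NVMe/TCP port */
        };

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
            listen(fd, 16) != 0) {
            perror("bind/listen");
            return 1;
        }
        for (;;) {
            int conn = accept(fd, NULL, NULL);  /* no NVMe handshake */
            if (conn >= 0)
                close(conn);                    /* drop each connection */
        }
    }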
00:29:07.191 [2024-10-08 18:45:01.119522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.119551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.119899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.119929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.120283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.120313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.120588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.120617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.121009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.121039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.121392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.121421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.121795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.121827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.122191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.122221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.122457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.122486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.122860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.122889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 
00:29:07.191 [2024-10-08 18:45:01.123287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.123316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.123683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.123712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.124067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.124098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.124477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.124506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.124882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.124912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.125316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.125347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.125718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.125747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.126061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.126091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.126312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.126343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.126712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.126741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 
00:29:07.191 [2024-10-08 18:45:01.127010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.127042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.127301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.127329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.127560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.127589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.127699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.127727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.128072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.128103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.128493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.128522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.128732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.128762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.129087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.191 [2024-10-08 18:45:01.129117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.191 qpair failed and we were unable to recover it. 00:29:07.191 [2024-10-08 18:45:01.129498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.129528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.129885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.129917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 
00:29:07.192 [2024-10-08 18:45:01.130251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.130281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.130561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.130590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.130692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.130719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.130920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.130949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.131189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.131218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.131457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.131487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.131701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.131731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.132017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.132047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.132287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.132315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 00:29:07.192 [2024-10-08 18:45:01.132594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.192 [2024-10-08 18:45:01.132623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.192 qpair failed and we were unable to recover it. 
00:29:07.192 [2024-10-08 18:45:01.132852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.192 [2024-10-08 18:45:01.132880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.192 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.133112 - 18:45:01.146921), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
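For anyone triaging this failure pattern: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections at 10.0.0.2:4420 at that moment, which is exactly the condition a target-disconnect test is expected to produce. A quick, generic way to confirm the constant on a build host (not part of the test itself):

    # errno 111 in the Linux UAPI headers
    $ grep ECONNREFUSED /usr/include/asm-generic/errno.h
    #define ECONNREFUSED    111     /* Connection refused */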
00:29:07.193 [2024-10-08 18:45:01.147175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.147204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:07.193 [2024-10-08 18:45:01.147463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.147493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 [2024-10-08 18:45:01.147752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:07.193 [2024-10-08 18:45:01.147781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.193 [2024-10-08 18:45:01.148170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.148203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.193 [2024-10-08 18:45:01.148447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.148476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 [2024-10-08 18:45:01.148844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.148873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 [2024-10-08 18:45:01.149104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.149134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
00:29:07.193 [2024-10-08 18:45:01.149429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.149458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
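The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 trace interleaved above is the harness wrapper around SPDK's JSON-RPC client. Run standalone, the equivalent call would look roughly like this (a sketch assuming a target listening on the default local RPC socket):

    # Create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0
    $ scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    Malloc0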
00:29:07.193 [2024-10-08 18:45:01.149697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.193 [2024-10-08 18:45:01.149728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.193 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.150110 - 18:45:01.172645), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
00:29:07.195 Malloc0
00:29:07.195 [2024-10-08 18:45:01.173123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.173186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
00:29:07.195 [2024-10-08 18:45:01.173433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.173465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
00:29:07.195 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.195 [2024-10-08 18:45:01.173912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.173942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
00:29:07.195 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:07.195 [2024-10-08 18:45:01.174378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.195 [2024-10-08 18:45:01.174408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
00:29:07.195 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.195 [2024-10-08 18:45:01.174706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.174736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.175024 - 18:45:01.175775), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
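rpc_cmd nvmf_create_transport -t tcp -o initializes the NVMe-oF TCP transport inside the target; the *** TCP Transport Init *** notice a few lines below is the target acknowledging it. Standalone, the step would look like this sketch (per rpc.py's nvmf_create_transport help, -o disables the C2H success optimization, a TCP-only option; worth verifying against your SPDK version):

    # Initialize the TCP transport inside the running nvmf_tgt
    $ scripts/rpc.py nvmf_create_transport -t tcp -o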
00:29:07.195 [2024-10-08 18:45:01.176029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.176059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.176370 - 18:45:01.179586), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
00:29:07.195 [2024-10-08 18:45:01.179968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.180008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
00:29:07.195 [2024-10-08 18:45:01.180288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:07.195 [2024-10-08 18:45:01.180377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.195 [2024-10-08 18:45:01.180405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.195 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.180679 - 18:45:01.183230), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
00:29:07.196 [2024-10-08 18:45:01.183576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.196 [2024-10-08 18:45:01.183605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.196 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.183988 - 18:45:01.186937), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
00:29:07.196 [2024-10-08 18:45:01.187399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.196 [2024-10-08 18:45:01.187429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.196 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.187765 - 18:45:01.189291), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
00:29:07.196 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.196 [2024-10-08 18:45:01.189708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.196 [2024-10-08 18:45:01.189738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.196 qpair failed and we were unable to recover it.
00:29:07.196 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:07.196 [2024-10-08 18:45:01.189911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.196 [2024-10-08 18:45:01.189939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.196 qpair failed and we were unable to recover it.
00:29:07.196 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.196 [2024-10-08 18:45:01.190257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.196 [2024-10-08 18:45:01.190287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.196 qpair failed and we were unable to recover it.
00:29:07.196 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.196 [2024-10-08 18:45:01.190544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.196 [2024-10-08 18:45:01.190573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.196 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failed triplets repeated (18:45:01.191004 - 18:45:01.193253), all tqpair=0x19b1550, 10.0.0.2:4420 ...]
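The subsystem-creation step traced above maps to a single JSON-RPC call; -a allows any host NQN to connect and -s sets the subsystem serial number. A standalone sketch:

    # Create subsystem cnode1, allow any host, fixed serial number
    $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001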
00:29:07.196 [2024-10-08 18:45:01.193472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.196 [2024-10-08 18:45:01.193500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.196 qpair failed and we were unable to recover it. 00:29:07.196 [2024-10-08 18:45:01.193878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.196 [2024-10-08 18:45:01.193907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.196 qpair failed and we were unable to recover it. 00:29:07.196 [2024-10-08 18:45:01.194145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.196 [2024-10-08 18:45:01.194174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.196 qpair failed and we were unable to recover it. 00:29:07.196 [2024-10-08 18:45:01.194602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.196 [2024-10-08 18:45:01.194632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.196 qpair failed and we were unable to recover it. 00:29:07.196 [2024-10-08 18:45:01.194969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.196 [2024-10-08 18:45:01.195040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.196 qpair failed and we were unable to recover it. 00:29:07.196 [2024-10-08 18:45:01.195284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.197 [2024-10-08 18:45:01.195313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.197 qpair failed and we were unable to recover it. 00:29:07.197 [2024-10-08 18:45:01.195648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.197 [2024-10-08 18:45:01.195676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.197 qpair failed and we were unable to recover it. 00:29:07.197 [2024-10-08 18:45:01.195887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.197 [2024-10-08 18:45:01.195919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.197 qpair failed and we were unable to recover it. 00:29:07.197 [2024-10-08 18:45:01.196285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.197 [2024-10-08 18:45:01.196316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.197 qpair failed and we were unable to recover it. 00:29:07.197 [2024-10-08 18:45:01.196682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.197 [2024-10-08 18:45:01.196711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420 00:29:07.197 qpair failed and we were unable to recover it. 
00:29:07.197 [2024-10-08 18:45:01.196964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.197005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.197390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.197420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.197801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.197830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.198211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.198242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.198467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.198497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.198862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.198890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.199128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.199157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.199467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.199496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.199861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.199891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.200127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.200158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.200515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.200544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.200790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.200819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.200960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.200998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.201356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.201386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.197 [2024-10-08 18:45:01.201763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.201794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:07.197 [2024-10-08 18:45:01.202066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.202095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.197 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.197 [2024-10-08 18:45:01.202494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.202524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.202731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.202759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.202917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.202945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.203146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.203176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.203555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.203584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.203826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.203854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.204203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.204234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.204483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.204512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.204899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.204928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.205180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.205210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.205587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.205616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.206089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.206120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.206325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.206354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.206738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.206774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.207015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.207046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.207293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.207322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.207550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.207579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.197 [2024-10-08 18:45:01.207959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.197 [2024-10-08 18:45:01.207998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.197 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.208464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.208493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.208879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.208908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.209115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.209144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.209601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.209631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.209995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.210028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.210417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.210446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.210815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.210844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.211208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.211239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.211444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.211474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.211840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.211871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.212118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.212151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.212404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.212432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.212820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.212850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.213300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.213331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.198 [2024-10-08 18:45:01.213705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.213736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.198 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.198 [2024-10-08 18:45:01.214207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.214238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.198 [2024-10-08 18:45:01.214453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.214483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.214825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.214854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.215082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.215116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.215518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.215548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.215778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.215809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.216211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.216243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.216613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.216644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.217041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.217072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.217445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.217475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.217708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.217738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.217994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.218025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.218402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.218433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.218796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.218826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.219034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.219065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.219319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.219352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.219712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.219743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
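The errno = 111 in the records above is Linux ECONNREFUSED: at this point in the run the target is not yet listening on 10.0.0.2:4420, so every host-side connect() is refused and each qpair attempt is abandoned. A minimal shell probe of the same condition (the address and port come from the log; everything else here is an illustrative sketch, not part of the test):

    # Try a plain TCP connect to the target address/port, as the failing
    # posix_sock_create() calls above do; /dev/tcp is a bash builtin path.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener is up on 10.0.0.2:4420"
    else
        echo "connect() refused (errno 111, ECONNREFUSED): nothing listening on 10.0.0.2:4420"
    fi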
00:29:07.198 [2024-10-08 18:45:01.220122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.220153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.220608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-08 18:45:01.220638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b1550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-08 18:45:01.220673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.461 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.461 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:07.461 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.461 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.461 [2024-10-08 18:45:01.231575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.461 [2024-10-08 18:45:01.231707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.461 [2024-10-08 18:45:01.231757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.461 [2024-10-08 18:45:01.231782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.461 [2024-10-08 18:45:01.231802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.461 [2024-10-08 18:45:01.231855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.461 qpair failed and we were unable to recover it.
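For context, the rpc_cmd lines interleaved through these records are host/target_disconnect.sh configuring the target over SPDK's JSON-RPC interface; rpc_cmd is the autotest shell wrapper around scripts/rpc.py. Run by hand, the same setup sequence would look roughly like the sketch below; the subsystem NQN, bdev name, address, and port are taken from the log, while the rest assumes default RPC settings:

    # Attach the Malloc0 bdev as a namespace, then expose the subsystem and
    # the discovery service on the TCP listener the host side connects to.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420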
00:29:07.461 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.461 18:45:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1412618
00:29:07.461 [2024-10-08 18:45:01.241473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.461 [2024-10-08 18:45:01.241572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.461 [2024-10-08 18:45:01.241602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.461 [2024-10-08 18:45:01.241616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.461 [2024-10-08 18:45:01.241630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.461 [2024-10-08 18:45:01.241659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.461 qpair failed and we were unable to recover it.
00:29:07.461 [2024-10-08 18:45:01.251345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.461 [2024-10-08 18:45:01.251429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.461 [2024-10-08 18:45:01.251452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.461 [2024-10-08 18:45:01.251462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.461 [2024-10-08 18:45:01.251473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.461 [2024-10-08 18:45:01.251494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.461 qpair failed and we were unable to recover it.
00:29:07.461 [2024-10-08 18:45:01.261469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.461 [2024-10-08 18:45:01.261548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.461 [2024-10-08 18:45:01.261568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.461 [2024-10-08 18:45:01.261585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.461 [2024-10-08 18:45:01.261593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.461 [2024-10-08 18:45:01.261611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.461 qpair failed and we were unable to recover it.
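From the "Target Listening" notice onward the failure signature changes: TCP connects now succeed, but the target rejects each I/O qpair's Fabrics CONNECT ("Unknown controller ID 0x1", suggesting the controller the qpair references no longer exists on the target), the host's CONNECT poll completes with error sct 1, sc 130, and qpair id 4 is given up as unrecoverable. When triaging a run like this, the two phases can be sized quickly from the saved console output (build.log is an assumed file name):

    # Count refused TCP connects vs. rejected Fabrics CONNECTs vs. total give-ups.
    grep -c 'connect() failed, errno = 111' build.log
    grep -c 'Unknown controller ID 0x1' build.log
    grep -c 'qpair failed and we were unable to recover it' build.log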
00:29:07.461 [2024-10-08 18:45:01.271307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.461 [2024-10-08 18:45:01.271384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.461 [2024-10-08 18:45:01.271404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.461 [2024-10-08 18:45:01.271412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.271419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.271436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.281402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.281475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.281492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.281500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.281506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.281522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.291436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.291507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.291525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.291532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.291539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.291554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.301454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.301524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.301544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.301551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.301557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.301575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.311518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.311592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.311610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.311618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.311626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.311641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.321544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.321607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.321625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.321633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.321639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.321656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.331568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.331639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.331676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.331686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.331694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.331717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.341589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.341663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.341701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.341711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.341720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.341745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.351635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.351703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.351724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.351738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.351745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.351763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.361613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.361672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.361690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.361698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.361704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.361722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.371657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.371734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.371752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.371759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.371766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.371782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.381697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.381769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.381786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.381793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.381800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.381816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.391629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.391702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.391720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.391727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.391733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.391749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.401729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.401797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.401819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.401827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.401833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.401850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.462 [2024-10-08 18:45:01.411781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.462 [2024-10-08 18:45:01.411884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.462 [2024-10-08 18:45:01.411902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.462 [2024-10-08 18:45:01.411910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.462 [2024-10-08 18:45:01.411917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.462 [2024-10-08 18:45:01.411932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.462 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.421811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.421882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.421899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.421907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.421914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.421930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.431931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.432009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.432028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.432035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.432042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.432059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.441867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.441931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.441955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.441963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.441970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.441995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.451890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.451950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.451972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.451985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.451992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.452009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.461951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.462034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.462055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.462062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.462069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.462086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.472360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.472439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.472457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.472466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.472473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.472489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.481955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.482041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.482059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.482066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.482073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.482089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.491943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.492010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.492028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.492036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.492042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.492058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.501966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.502052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.502075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.502083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.502090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.502108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.463 [2024-10-08 18:45:01.511967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.463 [2024-10-08 18:45:01.512051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.463 [2024-10-08 18:45:01.512070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.463 [2024-10-08 18:45:01.512077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.463 [2024-10-08 18:45:01.512084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.463 [2024-10-08 18:45:01.512101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.463 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.522118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.522234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.522252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.522261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.522268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.522284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.532132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.532201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.532231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.532240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.532246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.532262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.542170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.542235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.542252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.542260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.542267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.542282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.552253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.552326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.552343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.552351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.552357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.552372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.562157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.562222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.562239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.562247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.562253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.562268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.572293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.572397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.572414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.572421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.572428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.572449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.582299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.582367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.582384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.582391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.582398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.582413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.592363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.592443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.592461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.592468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.592475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.592490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.602294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.602358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.602375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.602382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.602388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.602407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.612325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.612385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.612401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.612409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.612415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.612430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.622345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.622430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.622448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.622456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.622462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.726 [2024-10-08 18:45:01.622476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.726 qpair failed and we were unable to recover it.
00:29:07.726 [2024-10-08 18:45:01.632409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.726 [2024-10-08 18:45:01.632467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.726 [2024-10-08 18:45:01.632481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.726 [2024-10-08 18:45:01.632488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.726 [2024-10-08 18:45:01.632495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.632509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.642420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.642476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.642491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.642498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.642504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.642522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.652342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.652441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.652456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.652463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.652469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.652483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.662484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.662539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.662553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.662560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.662566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.662583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.672504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.672560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.672574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.672581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.672587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.672600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.682528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.682580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.682593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.682600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.682607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.682620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.692549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.692604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.692617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.692624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.692631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.692644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.702588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.702641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.702655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.702663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.702669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.702683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.712609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.712669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.712687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.712694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.712700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.712713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.722641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.722696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.722709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.722716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.722722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.722735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.732675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.732728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.732741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.732748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.732754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.732767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.742679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.742741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.742754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.742760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.742767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.742780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.752748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.752828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.752841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.752848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.752855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.752871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.762753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.762831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.762844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.762851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.762857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.762871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.727 [2024-10-08 18:45:01.772794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.727 [2024-10-08 18:45:01.772887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.727 [2024-10-08 18:45:01.772900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.727 [2024-10-08 18:45:01.772907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.727 [2024-10-08 18:45:01.772913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.727 [2024-10-08 18:45:01.772926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.727 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.782696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.782754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.782767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.782774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.782780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.782794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.792849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.792902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.792915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.792923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.792929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.792942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.802879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.802929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.802946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.802953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.802960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.802977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.812902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.812956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.812969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.812980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.812987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.813000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.822933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.822993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.823007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.823015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.823021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.823039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.832926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.832986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.833000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.833007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.833013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.833027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.842940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.989 [2024-10-08 18:45:01.842998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.989 [2024-10-08 18:45:01.843012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.989 [2024-10-08 18:45:01.843019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.989 [2024-10-08 18:45:01.843025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.989 [2024-10-08 18:45:01.843042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.989 qpair failed and we were unable to recover it.
00:29:07.989 [2024-10-08 18:45:01.853014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.853061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.853075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.853082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.853088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.853101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.863043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.863102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.863115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.863122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.863129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.863142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.873074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.873175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.873189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.873196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.873202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.873216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.883112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.883168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.883181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.883188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.883195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.883208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.893078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.893131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.893147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.893154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.893161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.893174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.903165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.903229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.903243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.903250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.903257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.903270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.913168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.913262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.913276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.913283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.913289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.913302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.923210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.923267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.923280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.923287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.923293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.923307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.933188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.933243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.933256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.933263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.933273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.933287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.943270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.943329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.943342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.943349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.943356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.943369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.953305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.953357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.953370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.953377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.953383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.953396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.963319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.963372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.963385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.963393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.963399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.963412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.973318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.973365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.973380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.973387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.973393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.973407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.983356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.983417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.990 [2024-10-08 18:45:01.983430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.990 [2024-10-08 18:45:01.983437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.990 [2024-10-08 18:45:01.983444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.990 [2024-10-08 18:45:01.983457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.990 qpair failed and we were unable to recover it.
00:29:07.990 [2024-10-08 18:45:01.993406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.990 [2024-10-08 18:45:01.993466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.991 [2024-10-08 18:45:01.993480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.991 [2024-10-08 18:45:01.993486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.991 [2024-10-08 18:45:01.993493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.991 [2024-10-08 18:45:01.993506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.991 qpair failed and we were unable to recover it.
00:29:07.991 [2024-10-08 18:45:02.003284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.991 [2024-10-08 18:45:02.003339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.991 [2024-10-08 18:45:02.003353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.991 [2024-10-08 18:45:02.003360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.991 [2024-10-08 18:45:02.003366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.991 [2024-10-08 18:45:02.003380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.991 qpair failed and we were unable to recover it.
00:29:07.991 [2024-10-08 18:45:02.013479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.991 [2024-10-08 18:45:02.013534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.991 [2024-10-08 18:45:02.013548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.991 [2024-10-08 18:45:02.013555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.991 [2024-10-08 18:45:02.013561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.991 [2024-10-08 18:45:02.013574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.991 qpair failed and we were unable to recover it.
00:29:07.991 [2024-10-08 18:45:02.023377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.991 [2024-10-08 18:45:02.023433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.991 [2024-10-08 18:45:02.023447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.991 [2024-10-08 18:45:02.023455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.991 [2024-10-08 18:45:02.023465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.991 [2024-10-08 18:45:02.023478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.991 qpair failed and we were unable to recover it.
00:29:07.991 [2024-10-08 18:45:02.033397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.991 [2024-10-08 18:45:02.033457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.991 [2024-10-08 18:45:02.033470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.991 [2024-10-08 18:45:02.033477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.991 [2024-10-08 18:45:02.033483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.991 [2024-10-08 18:45:02.033496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.991 qpair failed and we were unable to recover it.
00:29:07.991 [2024-10-08 18:45:02.043404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.991 [2024-10-08 18:45:02.043452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.991 [2024-10-08 18:45:02.043465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.991 [2024-10-08 18:45:02.043472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.991 [2024-10-08 18:45:02.043479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:07.991 [2024-10-08 18:45:02.043492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.991 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.053570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.053629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.053642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.053649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.053656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.053669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.063597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.063650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.063663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.063671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.063677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.063690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.073609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.073680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.073693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.073700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.073707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.073720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.083628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.083700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.083713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.083720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.083727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.083740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.093658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.093746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.093760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.093767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.093773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.093786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.103690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.103742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.103756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.103763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.103769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.103783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.113727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.113783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.113797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.113804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.113813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.113827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.123741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.123790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.123803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.123810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.123816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.123829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.133777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.133854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.133867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.133874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.133880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.133893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.143781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.143843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.143858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.143865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.143871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.143888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.153837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.153895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.153909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.153916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.153922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.153936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.163851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.163941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.163955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.163962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.163968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.163986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.173923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.174007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.174021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.174028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.174034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.174048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.183919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.183977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.183990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.183997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.184003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.184017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.193963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.194023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.194036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.194043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.194050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.194063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.203836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.203890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.203903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.203910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.203920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.203934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.214002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.214051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.214064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.214072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.214078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.214091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.224035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.224122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.224135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.224142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.224148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.224162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.234065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.234121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.234134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.234141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.234148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.234161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.244084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.253 [2024-10-08 18:45:02.244140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.253 [2024-10-08 18:45:02.244153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.253 [2024-10-08 18:45:02.244160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.253 [2024-10-08 18:45:02.244167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.253 [2024-10-08 18:45:02.244180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.253 qpair failed and we were unable to recover it.
00:29:08.253 [2024-10-08 18:45:02.254092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.254 [2024-10-08 18:45:02.254144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.254 [2024-10-08 18:45:02.254158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.254 [2024-10-08 18:45:02.254165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.254 [2024-10-08 18:45:02.254171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.254 [2024-10-08 18:45:02.254185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.254 qpair failed and we were unable to recover it.
00:29:08.254 [2024-10-08 18:45:02.264152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.254 [2024-10-08 18:45:02.264207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.254 [2024-10-08 18:45:02.264220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.254 [2024-10-08 18:45:02.264227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.254 [2024-10-08 18:45:02.264234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.254 [2024-10-08 18:45:02.264247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.254 qpair failed and we were unable to recover it.
00:29:08.254 [2024-10-08 18:45:02.274161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.254 [2024-10-08 18:45:02.274216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.254 [2024-10-08 18:45:02.274229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.254 [2024-10-08 18:45:02.274236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.254 [2024-10-08 18:45:02.274243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.254 [2024-10-08 18:45:02.274256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.254 qpair failed and we were unable to recover it.
00:29:08.254 [2024-10-08 18:45:02.284101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.254 [2024-10-08 18:45:02.284151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.254 [2024-10-08 18:45:02.284165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.254 [2024-10-08 18:45:02.284172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.254 [2024-10-08 18:45:02.284178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.254 [2024-10-08 18:45:02.284191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.254 qpair failed and we were unable to recover it.
00:29:08.254 [2024-10-08 18:45:02.294219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.254 [2024-10-08 18:45:02.294269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.254 [2024-10-08 18:45:02.294283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.254 [2024-10-08 18:45:02.294294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.254 [2024-10-08 18:45:02.294300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.254 [2024-10-08 18:45:02.294313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.254 qpair failed and we were unable to recover it.
00:29:08.254 [2024-10-08 18:45:02.304292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.254 [2024-10-08 18:45:02.304349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.254 [2024-10-08 18:45:02.304362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.254 [2024-10-08 18:45:02.304369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.254 [2024-10-08 18:45:02.304375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.254 [2024-10-08 18:45:02.304388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.254 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.314162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.314233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.314246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.314253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.314260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.314273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.324309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.324361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.324374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.324381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.324387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.324400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.334197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.334245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.334259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.334266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.334272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.334285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.344332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.344388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.344401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.344408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.344415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.344428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.354400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.354454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.354468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.354475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.354481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.354494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.364413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.364461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.364474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.364481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.364488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.364500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.374431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.374479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.374492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.515 [2024-10-08 18:45:02.374499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.515 [2024-10-08 18:45:02.374505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.515 [2024-10-08 18:45:02.374518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.515 qpair failed and we were unable to recover it.
00:29:08.515 [2024-10-08 18:45:02.384463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.515 [2024-10-08 18:45:02.384517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.515 [2024-10-08 18:45:02.384530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.384540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.384547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.384560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.394508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.394560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.394574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.394581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.394587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.394600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.404508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.404587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.404600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.404607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.404614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.404627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.414544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.414592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.414606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.414613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.414619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.414632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.424596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.424665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.424679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.424686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.424693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.424706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.434615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.434674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.434688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.434695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.434701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.434714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.444629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.444681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.444694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.444702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.444708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.444721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.454621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.454677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.454690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.454697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.454703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.454716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.464581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.464637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.464650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.464657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.464664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.464677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.474635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.474692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.474706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.474716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.474722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.474736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.484748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.484800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.484814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.484822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.484828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.484841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.494764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.494812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.494828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.494836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.494842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.494858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.504808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.504905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.504919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.504926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.504933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.504946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.514838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.514896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.514910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.514917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.514923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.514937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.524851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.524903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.524916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.524923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.524930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.524943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.534883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.534936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.534949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.534956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.534963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.534979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.544901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.544959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.544972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.544983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.544990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.545004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.554912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.554966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.554983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.554990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.554996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.555009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.516 [2024-10-08 18:45:02.564949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.516 [2024-10-08 18:45:02.565007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.516 [2024-10-08 18:45:02.565022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.516 [2024-10-08 18:45:02.565033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.516 [2024-10-08 18:45:02.565039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.516 [2024-10-08 18:45:02.565054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.516 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.575030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.575081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.575095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.575103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.575109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.575123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.585060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.585157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.585170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.585177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.585183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.585197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.595039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.595100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.595114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.595121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.595127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.595141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.604990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.605049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.605063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.605070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.605076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.605090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.615101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.615156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.615170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.615177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.615184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.615197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.625156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.625229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.625242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.625249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.625255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.625269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.635160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.635218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.635231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.635238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.635244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.635257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.645187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.645238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.645251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.645258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.645264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.645277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.655217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.655299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.655316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.655323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.655329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.655342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.665269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.665330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.665344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.665351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.665357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.665370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.675311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.675369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.675383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.675390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.675396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.675409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.685309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.685365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.685379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.685386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.685392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.685406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.779 qpair failed and we were unable to recover it.
00:29:08.779 [2024-10-08 18:45:02.695324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.779 [2024-10-08 18:45:02.695415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.779 [2024-10-08 18:45:02.695429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.779 [2024-10-08 18:45:02.695436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.779 [2024-10-08 18:45:02.695442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.779 [2024-10-08 18:45:02.695456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.780 qpair failed and we were unable to recover it.
00:29:08.780 [2024-10-08 18:45:02.705357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.780 [2024-10-08 18:45:02.705411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.780 [2024-10-08 18:45:02.705426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.780 [2024-10-08 18:45:02.705433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.780 [2024-10-08 18:45:02.705439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:08.780 [2024-10-08 18:45:02.705452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.780 qpair failed and we were unable to recover it.
00:29:08.780 [2024-10-08 18:45:02.715398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.715459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.715473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.715480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.715486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.715500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.725408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.725463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.725478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.725485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.725491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.725505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.735437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.735494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.735508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.735515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.735521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.735535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 
00:29:08.780 [2024-10-08 18:45:02.745477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.745539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.745557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.745564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.745571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.745585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.755525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.755589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.755603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.755611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.755617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.755632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.765570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.765644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.765660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.765668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.765674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.765689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 
00:29:08.780 [2024-10-08 18:45:02.775570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.775627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.775644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.775651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.775658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.775673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.785632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.785698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.785715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.785723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.785729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.785751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.795689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.795787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.795823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.795832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.795840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.795862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 
00:29:08.780 [2024-10-08 18:45:02.805586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.805695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.805716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.805724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.805730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.805749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.815709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.815820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.815842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.815850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.815857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.815875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-10-08 18:45:02.825631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.780 [2024-10-08 18:45:02.825695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.780 [2024-10-08 18:45:02.825715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.780 [2024-10-08 18:45:02.825723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.780 [2024-10-08 18:45:02.825729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:08.780 [2024-10-08 18:45:02.825747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.780 qpair failed and we were unable to recover it. 
00:29:09.042 [2024-10-08 18:45:02.835761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.042 [2024-10-08 18:45:02.835833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.042 [2024-10-08 18:45:02.835858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.042 [2024-10-08 18:45:02.835866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.042 [2024-10-08 18:45:02.835872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.042 [2024-10-08 18:45:02.835889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.042 qpair failed and we were unable to recover it. 00:29:09.042 [2024-10-08 18:45:02.845751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.042 [2024-10-08 18:45:02.845815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.042 [2024-10-08 18:45:02.845836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.042 [2024-10-08 18:45:02.845843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.042 [2024-10-08 18:45:02.845853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.042 [2024-10-08 18:45:02.845871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.042 qpair failed and we were unable to recover it. 00:29:09.042 [2024-10-08 18:45:02.855809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.042 [2024-10-08 18:45:02.855870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.042 [2024-10-08 18:45:02.855890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.042 [2024-10-08 18:45:02.855897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.042 [2024-10-08 18:45:02.855904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.042 [2024-10-08 18:45:02.855920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.042 qpair failed and we were unable to recover it. 
00:29:09.042 [2024-10-08 18:45:02.865856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.042 [2024-10-08 18:45:02.865925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.042 [2024-10-08 18:45:02.865942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.042 [2024-10-08 18:45:02.865950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.042 [2024-10-08 18:45:02.865956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.042 [2024-10-08 18:45:02.865972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.042 qpair failed and we were unable to recover it. 00:29:09.042 [2024-10-08 18:45:02.875929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.042 [2024-10-08 18:45:02.876009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.042 [2024-10-08 18:45:02.876028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.042 [2024-10-08 18:45:02.876035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.042 [2024-10-08 18:45:02.876042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.042 [2024-10-08 18:45:02.876065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.042 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.885919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.885984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.886003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.886011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.886020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.886036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:02.895959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.896029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.896048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.896055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.896062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.896079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.905903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.905988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.906006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.906013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.906020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.906036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.916019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.916093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.916112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.916121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.916127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.916143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:02.926040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.926111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.926137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.926145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.926151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.926167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.936128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.936244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.936262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.936270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.936276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.936292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.946131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.946196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.946213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.946220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.946227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.946243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:02.956190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.956262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.956280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.956288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.956294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.956311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.966205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.966273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.966290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.966297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.966303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.966325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.976226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.976291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.976308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.976316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.976322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.976338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:02.986222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.986290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.986307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.986314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.986321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.986337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:02.996326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:02.996395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:02.996412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:02.996420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:02.996426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:02.996442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.006297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.006368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.006386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.006393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.006399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.006415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:03.016339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.016417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.016446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.016453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.016460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.016476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.026358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.026430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.026448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.026455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.026462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.026478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.036429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.036501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.036517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.036525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.036532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.036547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:03.046393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.046462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.046482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.046490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.046497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.046513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.056450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.056512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.056530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.056538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.056544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.056566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.066499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.066567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.066584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.066592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.066599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.066615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.043 [2024-10-08 18:45:03.076551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.076630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.076648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.076656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.076662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.076678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.086578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.086648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.086667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.086675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.086681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.086698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 00:29:09.043 [2024-10-08 18:45:03.096461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.043 [2024-10-08 18:45:03.096526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.043 [2024-10-08 18:45:03.096544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.043 [2024-10-08 18:45:03.096551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.043 [2024-10-08 18:45:03.096558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.043 [2024-10-08 18:45:03.096573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.043 qpair failed and we were unable to recover it. 
00:29:09.305 [2024-10-08 18:45:03.106512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.106585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.106608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.106615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.106621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.106637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.116556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.116646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.116664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.116671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.116678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.116694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.126682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.126747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.126764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.126772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.126778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.126793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 
00:29:09.305 [2024-10-08 18:45:03.136707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.136763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.136780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.136788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.136794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.136810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.146780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.146857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.146875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.146882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.146894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.146910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.156827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.156910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.156929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.156937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.156943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.156958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 
00:29:09.305 [2024-10-08 18:45:03.166787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.166850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.166868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.166875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.166882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.166898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.176828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.176896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.176916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.176924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.176931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.176947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.186883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.186985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.187003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.187011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.187018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.187034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 
00:29:09.305 [2024-10-08 18:45:03.196926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.197009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.197028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.197036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.197042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.197058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.206922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.207022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.207039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.207046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.207053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.207069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.216957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.217023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.217041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.217049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.217056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.217072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 
00:29:09.305 [2024-10-08 18:45:03.226961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.227039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.227058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.227065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.227071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.227087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.236923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.237037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.237055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.237063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.237074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.237090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.247030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.247090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.247108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.247115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.247122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.247138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 
00:29:09.305 [2024-10-08 18:45:03.257063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.257131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.257148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.257156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.257162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.257177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.267081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.267143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.267160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.267168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.267174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.267190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 00:29:09.305 [2024-10-08 18:45:03.277028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.305 [2024-10-08 18:45:03.277090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.305 [2024-10-08 18:45:03.277107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.305 [2024-10-08 18:45:03.277115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.305 [2024-10-08 18:45:03.277121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.305 [2024-10-08 18:45:03.277136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.305 qpair failed and we were unable to recover it. 
00:29:09.306 [2024-10-08 18:45:03.287142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.287216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.287236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.287245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.287255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.287271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.297061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.297120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.297138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.297146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.297152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.297167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.307165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.307221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.307237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.307244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.307251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.307266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.317212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.317273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.317289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.317296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.317302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.317316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.327238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.327298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.327313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.327321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.327332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.327346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.337274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.337361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.337376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.337384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.337390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.337403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.347272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.347353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.347367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.347374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.347380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.347394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.306 [2024-10-08 18:45:03.357359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.306 [2024-10-08 18:45:03.357438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.306 [2024-10-08 18:45:03.357453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.306 [2024-10-08 18:45:03.357460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.306 [2024-10-08 18:45:03.357466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.306 [2024-10-08 18:45:03.357480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.306 qpair failed and we were unable to recover it.
00:29:09.567 [2024-10-08 18:45:03.367195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.567 [2024-10-08 18:45:03.367254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.567 [2024-10-08 18:45:03.367271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.567 [2024-10-08 18:45:03.367278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.567 [2024-10-08 18:45:03.367284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.567 [2024-10-08 18:45:03.367302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.567 qpair failed and we were unable to recover it.
00:29:09.567 [2024-10-08 18:45:03.377393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.567 [2024-10-08 18:45:03.377443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.567 [2024-10-08 18:45:03.377458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.567 [2024-10-08 18:45:03.377466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.567 [2024-10-08 18:45:03.377472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.567 [2024-10-08 18:45:03.377485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.567 qpair failed and we were unable to recover it.
00:29:09.567 [2024-10-08 18:45:03.387402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.387452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.387467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.387474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.387480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.387494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.397440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.397491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.397506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.397513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.397519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.397533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.407427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.407475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.407489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.407496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.407503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.407516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.417508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.417563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.417576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.417583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.417593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.417607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.427449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.427499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.427513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.427520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.427526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.427539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.437530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.437587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.437601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.437608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.437614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.437628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.447529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.447574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.447588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.447595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.447601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.447614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.457595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.457646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.457659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.457666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.457672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.457686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.467604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.467656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.467670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.467677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.467683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.467696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.477656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.477707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.477721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.477729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.477735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.477751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.487636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.487685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.487701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.487708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.487714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.487728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.497675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.497734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.497760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.497769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.497775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.497795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.507663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.507713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.507738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.507752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.507759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.507777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.517780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.517833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.517847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.517854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.517861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.517875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.527702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.527746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.527761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.527768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.527774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.527788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.537801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.537850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.537864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.537871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.537877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.537891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.547782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.547826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.547840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.547847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.547853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.547866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.557876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.557922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.557936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.557943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.557950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.557963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.567862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.567910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.567923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.567930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.567936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.567950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.568 [2024-10-08 18:45:03.577756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.568 [2024-10-08 18:45:03.577800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.568 [2024-10-08 18:45:03.577813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.568 [2024-10-08 18:45:03.577820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.568 [2024-10-08 18:45:03.577826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.568 [2024-10-08 18:45:03.577840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.568 qpair failed and we were unable to recover it.
00:29:09.569 [2024-10-08 18:45:03.587908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.569 [2024-10-08 18:45:03.587953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.569 [2024-10-08 18:45:03.587966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.569 [2024-10-08 18:45:03.587978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.569 [2024-10-08 18:45:03.587985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.569 [2024-10-08 18:45:03.587998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.569 qpair failed and we were unable to recover it.
00:29:09.569 [2024-10-08 18:45:03.597857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.569 [2024-10-08 18:45:03.597905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.569 [2024-10-08 18:45:03.597919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.569 [2024-10-08 18:45:03.597929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.569 [2024-10-08 18:45:03.597936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.569 [2024-10-08 18:45:03.597950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.569 qpair failed and we were unable to recover it.
00:29:09.569 [2024-10-08 18:45:03.607952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.569 [2024-10-08 18:45:03.608002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.569 [2024-10-08 18:45:03.608016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.569 [2024-10-08 18:45:03.608023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.569 [2024-10-08 18:45:03.608029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.569 [2024-10-08 18:45:03.608042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.569 qpair failed and we were unable to recover it.
00:29:09.569 [2024-10-08 18:45:03.617942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.569 [2024-10-08 18:45:03.617991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.569 [2024-10-08 18:45:03.618004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.569 [2024-10-08 18:45:03.618011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.569 [2024-10-08 18:45:03.618017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.569 [2024-10-08 18:45:03.618030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.569 qpair failed and we were unable to recover it.
00:29:09.830 [2024-10-08 18:45:03.628020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.830 [2024-10-08 18:45:03.628067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.830 [2024-10-08 18:45:03.628080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.830 [2024-10-08 18:45:03.628087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.830 [2024-10-08 18:45:03.628093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.830 [2024-10-08 18:45:03.628106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.830 qpair failed and we were unable to recover it.
00:29:09.830 [2024-10-08 18:45:03.638077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.830 [2024-10-08 18:45:03.638128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.830 [2024-10-08 18:45:03.638142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.830 [2024-10-08 18:45:03.638149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.830 [2024-10-08 18:45:03.638155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.830 [2024-10-08 18:45:03.638169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.830 qpair failed and we were unable to recover it.
00:29:09.830 [2024-10-08 18:45:03.648067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.830 [2024-10-08 18:45:03.648111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.648125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.648132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.648138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.648151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.658070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.658117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.658130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.658139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.658147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.658160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.668111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.668165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.668178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.668185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.668192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.668205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.678068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.678118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.678132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.678138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.678145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.678159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.688039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.688113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.688126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.688136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.688143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.688157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.698195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.698236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.698250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.698257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.698263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.698276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.708153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.708213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.708226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.708233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.708240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.708253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.718306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.718353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.718366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.718374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.718380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.718393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.728285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.728330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.728343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.728351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.728357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.728370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.738357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.738446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.738459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.738466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.738473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.738486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.748304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.748351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.748365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.748372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.748379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.748393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.758416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.758467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.758480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.758488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.758494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.758507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.768393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.768439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.768453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.768460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.768466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.768479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.778421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.778466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.778479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.778489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.778496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.778509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.831 [2024-10-08 18:45:03.788448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.831 [2024-10-08 18:45:03.788499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.831 [2024-10-08 18:45:03.788512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.831 [2024-10-08 18:45:03.788519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.831 [2024-10-08 18:45:03.788526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.831 [2024-10-08 18:45:03.788539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.831 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.798528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.832 [2024-10-08 18:45:03.798577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.832 [2024-10-08 18:45:03.798591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.832 [2024-10-08 18:45:03.798598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.832 [2024-10-08 18:45:03.798604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.832 [2024-10-08 18:45:03.798617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.808514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.832 [2024-10-08 18:45:03.808569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.832 [2024-10-08 18:45:03.808582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.832 [2024-10-08 18:45:03.808589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.832 [2024-10-08 18:45:03.808596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.832 [2024-10-08 18:45:03.808609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.818550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.832 [2024-10-08 18:45:03.818591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.832 [2024-10-08 18:45:03.818604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.832 [2024-10-08 18:45:03.818611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.832 [2024-10-08 18:45:03.818617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.832 [2024-10-08 18:45:03.818631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.828524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.832 [2024-10-08 18:45:03.828570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.832 [2024-10-08 18:45:03.828584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.832 [2024-10-08 18:45:03.828591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.832 [2024-10-08 18:45:03.828597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.832 [2024-10-08 18:45:03.828610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.838651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.832 [2024-10-08 18:45:03.838704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.832 [2024-10-08 18:45:03.838717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.832 [2024-10-08 18:45:03.838724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.832 [2024-10-08 18:45:03.838731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.832 [2024-10-08 18:45:03.838744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.848491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.832 [2024-10-08 18:45:03.848536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.832 [2024-10-08 18:45:03.848549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.832 [2024-10-08 18:45:03.848556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.832 [2024-10-08 18:45:03.848562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:09.832 [2024-10-08 18:45:03.848575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.832 qpair failed and we were unable to recover it.
00:29:09.832 [2024-10-08 18:45:03.858640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.832 [2024-10-08 18:45:03.858685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.832 [2024-10-08 18:45:03.858698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.832 [2024-10-08 18:45:03.858705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.832 [2024-10-08 18:45:03.858711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.832 [2024-10-08 18:45:03.858724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-08 18:45:03.868538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.832 [2024-10-08 18:45:03.868584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.832 [2024-10-08 18:45:03.868600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.832 [2024-10-08 18:45:03.868608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.832 [2024-10-08 18:45:03.868614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.832 [2024-10-08 18:45:03.868627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.832 qpair failed and we were unable to recover it. 00:29:09.832 [2024-10-08 18:45:03.878760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.832 [2024-10-08 18:45:03.878813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.832 [2024-10-08 18:45:03.878826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.832 [2024-10-08 18:45:03.878833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.832 [2024-10-08 18:45:03.878840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:09.832 [2024-10-08 18:45:03.878853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.832 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-08 18:45:03.888723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.888767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.888781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.888788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.888795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.888808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-08 18:45:03.898756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.898802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.898815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.898822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.898828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.898841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-08 18:45:03.908763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.908820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.908835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.908842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.908849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.908865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-08 18:45:03.918840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.918893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.918907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.918915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.918921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.918934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-08 18:45:03.928858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.928940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.928954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.928962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.928970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.928992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-08 18:45:03.938850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.938890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.938904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.938911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.938917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.938931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-08 18:45:03.948884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.948933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.948946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.948953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.948959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.948972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-08 18:45:03.958834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.958895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.958911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.958918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.958924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.958938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-08 18:45:03.968990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.094 [2024-10-08 18:45:03.969058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.094 [2024-10-08 18:45:03.969071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.094 [2024-10-08 18:45:03.969078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.094 [2024-10-08 18:45:03.969084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.094 [2024-10-08 18:45:03.969097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-08 18:45:03.978957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:03.979003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:03.979017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:03.979024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:03.979030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:03.979043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:03.988989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:03.989035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:03.989048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:03.989055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:03.989061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:03.989075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:03.998958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:03.999057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:03.999071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:03.999078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:03.999084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:03.999105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-08 18:45:04.009053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.009095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.009109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.009116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.009123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.009136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.018986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.019054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.019068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.019075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.019081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.019095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.029010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.029096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.029109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.029116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.029123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.029136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-08 18:45:04.039032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.039084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.039098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.039105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.039111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.039125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.049121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.049197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.049215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.049222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.049228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.049241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.059182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.059231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.059244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.059251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.059257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.059271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-08 18:45:04.069201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.069248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.069261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.069269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.069275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.069288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.079213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.079263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.079277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.079284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.079291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.079304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.089279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.089322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.089336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.089343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.089349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.089366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-08 18:45:04.099255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.099293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.099307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.099314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.099320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.099333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.109316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.109361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.095 [2024-10-08 18:45:04.109375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.095 [2024-10-08 18:45:04.109382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.095 [2024-10-08 18:45:04.109388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.095 [2024-10-08 18:45:04.109401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-08 18:45:04.119347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.095 [2024-10-08 18:45:04.119398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.096 [2024-10-08 18:45:04.119412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.096 [2024-10-08 18:45:04.119419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.096 [2024-10-08 18:45:04.119425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.096 [2024-10-08 18:45:04.119438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-10-08 18:45:04.129347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.096 [2024-10-08 18:45:04.129414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.096 [2024-10-08 18:45:04.129427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.096 [2024-10-08 18:45:04.129434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.096 [2024-10-08 18:45:04.129441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.096 [2024-10-08 18:45:04.129454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-08 18:45:04.139371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.096 [2024-10-08 18:45:04.139416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.096 [2024-10-08 18:45:04.139433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.096 [2024-10-08 18:45:04.139440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.096 [2024-10-08 18:45:04.139446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.096 [2024-10-08 18:45:04.139459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-08 18:45:04.149392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.096 [2024-10-08 18:45:04.149438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.096 [2024-10-08 18:45:04.149451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.096 [2024-10-08 18:45:04.149458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.096 [2024-10-08 18:45:04.149465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.096 [2024-10-08 18:45:04.149478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.358 [2024-10-08 18:45:04.159447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.159493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.159506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.159513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.159519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.159533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 00:29:10.358 [2024-10-08 18:45:04.169353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.169406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.169419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.169426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.169433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.169446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 00:29:10.358 [2024-10-08 18:45:04.179491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.179537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.179550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.179557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.179564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.179581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 
00:29:10.358 [2024-10-08 18:45:04.189581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.189656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.189669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.189676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.189683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.189696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 00:29:10.358 [2024-10-08 18:45:04.199558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.199606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.199620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.199627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.199633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.199647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 00:29:10.358 [2024-10-08 18:45:04.209583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.209624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.209637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.209644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.209651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.209664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 
00:29:10.358 [2024-10-08 18:45:04.219615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.358 [2024-10-08 18:45:04.219685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.358 [2024-10-08 18:45:04.219698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.358 [2024-10-08 18:45:04.219705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.358 [2024-10-08 18:45:04.219711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.358 [2024-10-08 18:45:04.219725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.358 qpair failed and we were unable to recover it. 00:29:10.358 [2024-10-08 18:45:04.229619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.229675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.229694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.229701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.229707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.229721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.239568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.239611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.239625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.239632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.239638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.239651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 
00:29:10.359 [2024-10-08 18:45:04.249677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.249733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.249746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.249754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.249760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.249773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.259701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.259750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.259774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.259783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.259790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.259808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.269750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.269799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.269824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.269832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.269839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.269862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 
00:29:10.359 [2024-10-08 18:45:04.279761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.279816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.279841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.279850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.279857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.279875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.289798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.289843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.289859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.289866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.289872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.289887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.299801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.299849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.299863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.299870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.299876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.299890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 
00:29:10.359 [2024-10-08 18:45:04.309868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.309925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.309940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.309948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.309955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.309969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.319879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.319928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.319945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.319952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.319959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.319972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.329900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.329950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.329964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.329971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.329982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.329995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 
00:29:10.359 [2024-10-08 18:45:04.339901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.339945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.339959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.339965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.339972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.339990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.349968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.350021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.350034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.350041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.350047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.350061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 00:29:10.359 [2024-10-08 18:45:04.359994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.359 [2024-10-08 18:45:04.360086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.359 [2024-10-08 18:45:04.360100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.359 [2024-10-08 18:45:04.360107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.359 [2024-10-08 18:45:04.360117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:10.359 [2024-10-08 18:45:04.360131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.359 qpair failed and we were unable to recover it. 
00:29:10.359 [2024-10-08 18:45:04.369883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.359 [2024-10-08 18:45:04.369925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.360 [2024-10-08 18:45:04.369939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.360 [2024-10-08 18:45:04.369946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.360 [2024-10-08 18:45:04.369952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.360 [2024-10-08 18:45:04.369966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.360 qpair failed and we were unable to recover it.
00:29:10.360 [2024-10-08 18:45:04.380023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.360 [2024-10-08 18:45:04.380074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.360 [2024-10-08 18:45:04.380087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.360 [2024-10-08 18:45:04.380094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.360 [2024-10-08 18:45:04.380100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.360 [2024-10-08 18:45:04.380114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.360 qpair failed and we were unable to recover it.
00:29:10.360 [2024-10-08 18:45:04.390043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.360 [2024-10-08 18:45:04.390090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.360 [2024-10-08 18:45:04.390104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.360 [2024-10-08 18:45:04.390110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.360 [2024-10-08 18:45:04.390117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.360 [2024-10-08 18:45:04.390130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.360 qpair failed and we were unable to recover it.
00:29:10.360 [2024-10-08 18:45:04.400120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.360 [2024-10-08 18:45:04.400167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.360 [2024-10-08 18:45:04.400181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.360 [2024-10-08 18:45:04.400188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.360 [2024-10-08 18:45:04.400194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.360 [2024-10-08 18:45:04.400208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.360 qpair failed and we were unable to recover it.
00:29:10.360 [2024-10-08 18:45:04.410128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.360 [2024-10-08 18:45:04.410207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.360 [2024-10-08 18:45:04.410221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.360 [2024-10-08 18:45:04.410228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.360 [2024-10-08 18:45:04.410235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.360 [2024-10-08 18:45:04.410248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.360 qpair failed and we were unable to recover it.
00:29:10.622 [2024-10-08 18:45:04.420154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.622 [2024-10-08 18:45:04.420196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.622 [2024-10-08 18:45:04.420209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.622 [2024-10-08 18:45:04.420216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.622 [2024-10-08 18:45:04.420223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.622 [2024-10-08 18:45:04.420236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.622 qpair failed and we were unable to recover it.
00:29:10.622 [2024-10-08 18:45:04.430177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.622 [2024-10-08 18:45:04.430223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.622 [2024-10-08 18:45:04.430237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.622 [2024-10-08 18:45:04.430243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.622 [2024-10-08 18:45:04.430250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.622 [2024-10-08 18:45:04.430263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.622 qpair failed and we were unable to recover it.
00:29:10.622 [2024-10-08 18:45:04.440251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.622 [2024-10-08 18:45:04.440299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.622 [2024-10-08 18:45:04.440312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.622 [2024-10-08 18:45:04.440319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.622 [2024-10-08 18:45:04.440325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.622 [2024-10-08 18:45:04.440338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.622 qpair failed and we were unable to recover it.
00:29:10.622 [2024-10-08 18:45:04.450088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.622 [2024-10-08 18:45:04.450131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.622 [2024-10-08 18:45:04.450145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.450152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.450162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.450175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.460253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.460301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.460315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.460321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.460328] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.460341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.470295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.470344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.470357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.470364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.470370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.470383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.480251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.480313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.480327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.480334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.480340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.480353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.490335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.490377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.490390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.490397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.490403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.490416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.500347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.500399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.500415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.500422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.500428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.500442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.510397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.510442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.510456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.510463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.510469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.510482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.520418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.520468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.520481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.520488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.520494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.520507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.530318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.530363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.530377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.530384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.530390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.530403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.540456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.540501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.540514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.540521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.540531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.540544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.550378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.550426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.550439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.550446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.550452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.550466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.560542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.560593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.560607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.560613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.560619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.560633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.570423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.570470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.570483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.570490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.570496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.570509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.580586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.580633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.580646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.580653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.623 [2024-10-08 18:45:04.580659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.623 [2024-10-08 18:45:04.580672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.623 qpair failed and we were unable to recover it.
00:29:10.623 [2024-10-08 18:45:04.590645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.623 [2024-10-08 18:45:04.590729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.623 [2024-10-08 18:45:04.590742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.623 [2024-10-08 18:45:04.590749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.590755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.590768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.600621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.600695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.600709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.600717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.600723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.600736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.610660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.610710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.610724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.610730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.610737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.610750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.620683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.620725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.620738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.620745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.620752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.620765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.630715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.630762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.630775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.630782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.630795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.630808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.640718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.640769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.640782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.640789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.640796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.640809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.650767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.650856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.650869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.650876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.650883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.650897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.660791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.660839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.660853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.660860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.660866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.660879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.624 [2024-10-08 18:45:04.670808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.624 [2024-10-08 18:45:04.670850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.624 [2024-10-08 18:45:04.670863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.624 [2024-10-08 18:45:04.670870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.624 [2024-10-08 18:45:04.670877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.624 [2024-10-08 18:45:04.670890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.624 qpair failed and we were unable to recover it.
00:29:10.888 [2024-10-08 18:45:04.680865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.888 [2024-10-08 18:45:04.680916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.888 [2024-10-08 18:45:04.680930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.888 [2024-10-08 18:45:04.680937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.888 [2024-10-08 18:45:04.680943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.888 [2024-10-08 18:45:04.680956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.888 qpair failed and we were unable to recover it.
00:29:10.888 [2024-10-08 18:45:04.690874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.888 [2024-10-08 18:45:04.690926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.888 [2024-10-08 18:45:04.690940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.888 [2024-10-08 18:45:04.690947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.888 [2024-10-08 18:45:04.690953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.888 [2024-10-08 18:45:04.690966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.888 qpair failed and we were unable to recover it.
00:29:10.888 [2024-10-08 18:45:04.700899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.888 [2024-10-08 18:45:04.700967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.888 [2024-10-08 18:45:04.700985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.888 [2024-10-08 18:45:04.700992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.888 [2024-10-08 18:45:04.700998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.888 [2024-10-08 18:45:04.701011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.888 qpair failed and we were unable to recover it.
00:29:10.888 [2024-10-08 18:45:04.710945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.888 [2024-10-08 18:45:04.710996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.888 [2024-10-08 18:45:04.711009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.888 [2024-10-08 18:45:04.711017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.888 [2024-10-08 18:45:04.711023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.888 [2024-10-08 18:45:04.711037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.888 qpair failed and we were unable to recover it.
00:29:10.888 [2024-10-08 18:45:04.720970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.888 [2024-10-08 18:45:04.721018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.888 [2024-10-08 18:45:04.721032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.888 [2024-10-08 18:45:04.721043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.888 [2024-10-08 18:45:04.721049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.888 [2024-10-08 18:45:04.721062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.888 qpair failed and we were unable to recover it.
00:29:10.888 [2024-10-08 18:45:04.730984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.888 [2024-10-08 18:45:04.731027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.731040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.731047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.731053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.731066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.741018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.741072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.741086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.741093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.741099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.741113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.751054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.751102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.751115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.751122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.751128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.751141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.761069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.761121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.761135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.761142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.761148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.761161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.771090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.771137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.771150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.771157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.771163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.771177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.781119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.781166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.781179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.781186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.781192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.781205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.791147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.791196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.791210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.791216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.791223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.791236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.801176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.801222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.801236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.801243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.801249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.801262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.811209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.811252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.811266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.811277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.811283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.811297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.821215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.821257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.821270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.821278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.821286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.821299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.831259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.831307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.831320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.831326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.831333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.831346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.841157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.841204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.841217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.841224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.841231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.841243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.851311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.851357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.851372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.851382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.851389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.851403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.861312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.861355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.861369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.889 [2024-10-08 18:45:04.861376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.889 [2024-10-08 18:45:04.861383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.889 [2024-10-08 18:45:04.861397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.889 qpair failed and we were unable to recover it.
00:29:10.889 [2024-10-08 18:45:04.871358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.889 [2024-10-08 18:45:04.871445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.889 [2024-10-08 18:45:04.871459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.871466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.871472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.871485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.881411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.881463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.881476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.881483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.881489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.881502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.891400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.891482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.891496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.891503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.891509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.891522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.901436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.901488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.901502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.901513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.901519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.901532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.911358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.911404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.911417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.911424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.911431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.911444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.921506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.921553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.921566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.921573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.921579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.921592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.931525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.931568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.931581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.931588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.931594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.931607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:10.890 [2024-10-08 18:45:04.941540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.890 [2024-10-08 18:45:04.941582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.890 [2024-10-08 18:45:04.941596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.890 [2024-10-08 18:45:04.941603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.890 [2024-10-08 18:45:04.941609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:10.890 [2024-10-08 18:45:04.941622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.890 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:04.951582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:04.951680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:04.951695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:04.951702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:04.951709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:04.951726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:04.961576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:04.961625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:04.961639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:04.961646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:04.961652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:04.961665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:04.971627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:04.971719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:04.971732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:04.971739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:04.971746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:04.971759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:04.981543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:04.981593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:04.981606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:04.981613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:04.981619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:04.981632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:04.991569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:04.991618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:04.991631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:04.991641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:04.991648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:04.991661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:05.001740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:05.001792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:05.001805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:05.001813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:05.001819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:05.001832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:05.011740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:05.011799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:05.011813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:05.011820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:05.011826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:05.011839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:05.021791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:05.021836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:05.021849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:05.021856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:05.021862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:05.021875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:05.031806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:05.031854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:05.031868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:05.031875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:05.031881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:05.031894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:05.041852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:05.041899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.152 [2024-10-08 18:45:05.041912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.152 [2024-10-08 18:45:05.041920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.152 [2024-10-08 18:45:05.041926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.152 [2024-10-08 18:45:05.041939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.152 qpair failed and we were unable to recover it.
00:29:11.152 [2024-10-08 18:45:05.051860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.152 [2024-10-08 18:45:05.051906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.153 [2024-10-08 18:45:05.051920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.153 [2024-10-08 18:45:05.051927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.153 [2024-10-08 18:45:05.051933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:11.153 [2024-10-08 18:45:05.051947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.153 qpair failed and we were unable to recover it.
00:29:11.153 [2024-10-08 18:45:05.061881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.061924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.061938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.061945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.061951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.061964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.071914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.072003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.072017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.072024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.072030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.072044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.081953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.082000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.082017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.082024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.082030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.082044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 
00:29:11.153 [2024-10-08 18:45:05.091968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.092019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.092033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.092039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.092046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.092060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.101865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.101907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.101920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.101927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.101934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.101946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.112028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.112079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.112093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.112100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.112107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.112120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 
00:29:11.153 [2024-10-08 18:45:05.122112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.122191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.122204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.122211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.122217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.122230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.132081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.132128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.132141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.132148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.132154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.132167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.142128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.142172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.142185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.142192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.142198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.142211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 
00:29:11.153 [2024-10-08 18:45:05.152128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.152175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.152188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.152195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.152201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.152214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.162211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.162283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.162296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.162303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.162309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.162322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.172154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.172194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.172210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.172217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.172224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.172237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 
00:29:11.153 [2024-10-08 18:45:05.182206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.182293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.182306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.153 [2024-10-08 18:45:05.182313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.153 [2024-10-08 18:45:05.182319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.153 [2024-10-08 18:45:05.182332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.153 qpair failed and we were unable to recover it. 00:29:11.153 [2024-10-08 18:45:05.192234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.153 [2024-10-08 18:45:05.192281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.153 [2024-10-08 18:45:05.192295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.154 [2024-10-08 18:45:05.192301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.154 [2024-10-08 18:45:05.192308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.154 [2024-10-08 18:45:05.192321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.154 qpair failed and we were unable to recover it. 00:29:11.154 [2024-10-08 18:45:05.202300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.154 [2024-10-08 18:45:05.202381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.154 [2024-10-08 18:45:05.202395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.154 [2024-10-08 18:45:05.202402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.154 [2024-10-08 18:45:05.202408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.154 [2024-10-08 18:45:05.202421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.154 qpair failed and we were unable to recover it. 
00:29:11.415 [2024-10-08 18:45:05.212290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.415 [2024-10-08 18:45:05.212336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.415 [2024-10-08 18:45:05.212349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.415 [2024-10-08 18:45:05.212356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.415 [2024-10-08 18:45:05.212362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.415 [2024-10-08 18:45:05.212379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.415 qpair failed and we were unable to recover it. 00:29:11.415 [2024-10-08 18:45:05.222306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.222347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.222361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.222368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.222374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.222387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.232347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.232391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.232404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.232411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.232417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.232430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 
00:29:11.416 [2024-10-08 18:45:05.242387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.242439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.242453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.242460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.242466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.242480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.252407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.252451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.252465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.252471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.252478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.252491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.262432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.262475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.262492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.262499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.262505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.262518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 
00:29:11.416 [2024-10-08 18:45:05.272463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.272505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.272518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.272525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.272531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.272544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.282468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.282517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.282530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.282537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.282543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.282556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.292508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.292551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.292565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.292571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.292578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.292591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 
00:29:11.416 [2024-10-08 18:45:05.302535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.302579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.302593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.302600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.302606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.302627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.312567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.312613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.312627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.312634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.312640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.312653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.322474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.322526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.322539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.322546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.322552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.322565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 
00:29:11.416 [2024-10-08 18:45:05.332485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.332551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.332564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.332571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.332577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.332590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.342624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.342662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.342675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.342682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.342688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.342701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 00:29:11.416 [2024-10-08 18:45:05.352653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.416 [2024-10-08 18:45:05.352706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.416 [2024-10-08 18:45:05.352723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.416 [2024-10-08 18:45:05.352730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.416 [2024-10-08 18:45:05.352736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.416 [2024-10-08 18:45:05.352749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.416 qpair failed and we were unable to recover it. 
00:29:11.416 [2024-10-08 18:45:05.362647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.362702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.362715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.362722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.362728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.362742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.372693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.372743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.372768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.372777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.372784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.372803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.382681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.382776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.382791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.382798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.382805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.382819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 
00:29:11.417 [2024-10-08 18:45:05.392795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.392847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.392871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.392880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.392886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.392909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.402807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.402872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.402887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.402894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.402901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.402916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.412740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.412798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.412812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.412819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.412825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.412839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 
00:29:11.417 [2024-10-08 18:45:05.422853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.422906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.422919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.422927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.422933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.422946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.432894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.432966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.432983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.432990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.432996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.433010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.442971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.443059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.443075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.443082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.443089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.443102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 
00:29:11.417 [2024-10-08 18:45:05.452940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.452992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.453006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.453013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.453019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.453033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-10-08 18:45:05.462963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.417 [2024-10-08 18:45:05.463005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.417 [2024-10-08 18:45:05.463018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.417 [2024-10-08 18:45:05.463025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.417 [2024-10-08 18:45:05.463032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.417 [2024-10-08 18:45:05.463045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.680 [2024-10-08 18:45:05.472994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.473040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.473053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.473060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.473066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.473080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 
00:29:11.680 [2024-10-08 18:45:05.483025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.483088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.483102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.483110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.483116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.483133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 00:29:11.680 [2024-10-08 18:45:05.493107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.493163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.493177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.493184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.493190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.493203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 00:29:11.680 [2024-10-08 18:45:05.503065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.503111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.503126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.503134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.503140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.503154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 
00:29:11.680 [2024-10-08 18:45:05.513031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.513078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.513091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.513098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.513105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.513118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 00:29:11.680 [2024-10-08 18:45:05.523145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.523192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.523206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.523213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.523219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.523232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 00:29:11.680 [2024-10-08 18:45:05.533052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.533096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.533113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.533120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.533126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.533139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 
00:29:11.680 [2024-10-08 18:45:05.543189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.543234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.543248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.543255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.543261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.680 [2024-10-08 18:45:05.543274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.680 qpair failed and we were unable to recover it. 00:29:11.680 [2024-10-08 18:45:05.553112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.680 [2024-10-08 18:45:05.553168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.680 [2024-10-08 18:45:05.553182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.680 [2024-10-08 18:45:05.553189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.680 [2024-10-08 18:45:05.553195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.553208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.563126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.563182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.563195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.563202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.563208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.563221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 
00:29:11.681 [2024-10-08 18:45:05.573131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.573177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.573190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.573197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.573207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.573220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.583169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.583230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.583243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.583250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.583256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.583269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.593324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.593374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.593387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.593394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.593400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.593413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 
00:29:11.681 [2024-10-08 18:45:05.603226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.603276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.603290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.603297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.603303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.603316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.613370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.613414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.613427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.613433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.613440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.613453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.623269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.623317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.623330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.623337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.623343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.623356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 
00:29:11.681 [2024-10-08 18:45:05.633313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.633361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.633374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.633381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.633387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.633400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.643425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.643471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.643485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.643492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.643498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.643511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.653470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.653515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.653528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.653535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.653541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.653554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 
00:29:11.681 [2024-10-08 18:45:05.663515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.663571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.663584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.663591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.663601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.663614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.673536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.673582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.673596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.673602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.673608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.673621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 00:29:11.681 [2024-10-08 18:45:05.683536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.683584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.683597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.683604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.683610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.683623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.681 qpair failed and we were unable to recover it. 
00:29:11.681 [2024-10-08 18:45:05.693616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.681 [2024-10-08 18:45:05.693693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.681 [2024-10-08 18:45:05.693706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.681 [2024-10-08 18:45:05.693713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.681 [2024-10-08 18:45:05.693720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.681 [2024-10-08 18:45:05.693732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.682 qpair failed and we were unable to recover it. 00:29:11.682 [2024-10-08 18:45:05.703617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.682 [2024-10-08 18:45:05.703662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.682 [2024-10-08 18:45:05.703676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.682 [2024-10-08 18:45:05.703683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.682 [2024-10-08 18:45:05.703689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.682 [2024-10-08 18:45:05.703702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.682 qpair failed and we were unable to recover it. 00:29:11.682 [2024-10-08 18:45:05.713649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.682 [2024-10-08 18:45:05.713703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.682 [2024-10-08 18:45:05.713716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.682 [2024-10-08 18:45:05.713723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.682 [2024-10-08 18:45:05.713729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.682 [2024-10-08 18:45:05.713743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.682 qpair failed and we were unable to recover it. 
00:29:11.682 [2024-10-08 18:45:05.723546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.682 [2024-10-08 18:45:05.723590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.682 [2024-10-08 18:45:05.723603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.682 [2024-10-08 18:45:05.723610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.682 [2024-10-08 18:45:05.723616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.682 [2024-10-08 18:45:05.723630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.682 qpair failed and we were unable to recover it. 00:29:11.682 [2024-10-08 18:45:05.733689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.682 [2024-10-08 18:45:05.733744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.682 [2024-10-08 18:45:05.733758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.682 [2024-10-08 18:45:05.733764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.682 [2024-10-08 18:45:05.733771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.682 [2024-10-08 18:45:05.733784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.682 qpair failed and we were unable to recover it. 00:29:11.943 [2024-10-08 18:45:05.743718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.943 [2024-10-08 18:45:05.743766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.943 [2024-10-08 18:45:05.743779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.943 [2024-10-08 18:45:05.743786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.943 [2024-10-08 18:45:05.743793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.943 [2024-10-08 18:45:05.743806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.943 qpair failed and we were unable to recover it. 
00:29:11.943 [2024-10-08 18:45:05.753763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.943 [2024-10-08 18:45:05.753854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.943 [2024-10-08 18:45:05.753868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.943 [2024-10-08 18:45:05.753875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.943 [2024-10-08 18:45:05.753884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.943 [2024-10-08 18:45:05.753898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.943 qpair failed and we were unable to recover it. 00:29:11.943 [2024-10-08 18:45:05.763787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.943 [2024-10-08 18:45:05.763835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.763849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.763856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.763862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.763875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.773808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.773852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.773865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.773873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.773879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.773892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 
00:29:11.944 [2024-10-08 18:45:05.783875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.783946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.783959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.783966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.783972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.783990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.793852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.793900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.793913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.793920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.793926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.793939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.803900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.803950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.803964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.803971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.803981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.803995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 
00:29:11.944 [2024-10-08 18:45:05.813914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.813961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.813977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.813985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.813991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.814004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.823943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.824028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.824042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.824049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.824055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.824068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.833967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.834016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.834030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.834037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.834043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.834056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 
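[Editor's note, not part of the captured console output: the target-side record at the head of each block, _nvmf_ctrlr_add_io_qpair: Unknown controller ID 0x1, is the cause of everything that follows. An I/O-queue CONNECT must carry the controller ID that the earlier admin-queue CONNECT returned; here the target has no live controller 0x1, so it fails the command with the Connect Invalid Parameters status decoded above. For reference, the CONNECT data block as laid out by the NVMe-oF spec is sketched below; the field names are descriptive, not SPDK's exact identifiers:]

/* Illustrative layout of the Fabrics CONNECT command's 1024-byte data
 * block (per the NVMe over Fabrics specification; names here are
 * descriptive, not SPDK's). The target looks up `cntlid` -- 0x1 in
 * this log -- and, finding no matching live controller, rejects the
 * CONNECT with "Unknown controller ID". */
#include <stdint.h>

struct fabrics_connect_data {
        uint8_t  hostid[16];    /* host identifier (UUID) */
        uint16_t cntlid;        /* controller ID: 0xFFFF on the admin
                                   queue asks the target to pick one;
                                   I/O queues must echo the exact ID */
        uint8_t  reserved[238];
        char     subnqn[256];   /* e.g. nqn.2016-06.io.spdk:cnode1 here */
        char     hostnqn[256];
        uint8_t  reserved2[256];
};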
00:29:11.944 [2024-10-08 18:45:05.843865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.843937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.843950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.843957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.843966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.843983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.854026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.854104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.854118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.854125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.854131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.854145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.864057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.864105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.864118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.864125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.864131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.864144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 
00:29:11.944 [2024-10-08 18:45:05.874092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.874136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.874149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.874156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.874162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.874175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.884127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.884175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.884188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.884195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.884201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.884214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 00:29:11.944 [2024-10-08 18:45:05.894122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.894170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.894183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.894190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.944 [2024-10-08 18:45:05.894196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.944 [2024-10-08 18:45:05.894210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.944 qpair failed and we were unable to recover it. 
00:29:11.944 [2024-10-08 18:45:05.904138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.944 [2024-10-08 18:45:05.904179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.944 [2024-10-08 18:45:05.904193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.944 [2024-10-08 18:45:05.904200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.904206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.904219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:11.945 [2024-10-08 18:45:05.914202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.914250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.914264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.914271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.914277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.914290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:11.945 [2024-10-08 18:45:05.924263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.924326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.924340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.924347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.924353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.924366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 
00:29:11.945 [2024-10-08 18:45:05.934257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.934301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.934314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.934325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.934331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.934344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:11.945 [2024-10-08 18:45:05.944268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.944319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.944332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.944340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.944346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.944359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:11.945 [2024-10-08 18:45:05.954268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.954339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.954352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.954359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.954366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.954379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 
00:29:11.945 [2024-10-08 18:45:05.964327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.964374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.964388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.964395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.964401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.964414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:11.945 [2024-10-08 18:45:05.974379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.974462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.974475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.974483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.974490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.974503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:11.945 [2024-10-08 18:45:05.984240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.984281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.984294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.984301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.984308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.984320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 
00:29:11.945 [2024-10-08 18:45:05.994394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.945 [2024-10-08 18:45:05.994442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.945 [2024-10-08 18:45:05.994455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.945 [2024-10-08 18:45:05.994462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.945 [2024-10-08 18:45:05.994468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:11.945 [2024-10-08 18:45:05.994481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.945 qpair failed and we were unable to recover it. 00:29:12.209 [2024-10-08 18:45:06.004317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.004378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.004392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.004399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.004405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.004418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 00:29:12.209 [2024-10-08 18:45:06.014498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.014544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.014557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.014564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.014570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.014583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 
00:29:12.209 [2024-10-08 18:45:06.024465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.024509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.024522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.024533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.024539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.024552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 00:29:12.209 [2024-10-08 18:45:06.034385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.034430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.034444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.034451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.034457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.034470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 00:29:12.209 [2024-10-08 18:45:06.044550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.044600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.044614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.044622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.044629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.044642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 
00:29:12.209 [2024-10-08 18:45:06.054435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.054476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.054489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.054496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.054502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.054515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 00:29:12.209 [2024-10-08 18:45:06.064598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.064646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.064659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.064666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.064673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.064686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 00:29:12.209 [2024-10-08 18:45:06.074599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.209 [2024-10-08 18:45:06.074644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.209 [2024-10-08 18:45:06.074657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.209 [2024-10-08 18:45:06.074664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.209 [2024-10-08 18:45:06.074671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550 00:29:12.209 [2024-10-08 18:45:06.074684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.209 qpair failed and we were unable to recover it. 
00:29:12.209 [2024-10-08 18:45:06.084623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.209 [2024-10-08 18:45:06.084673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.209 [2024-10-08 18:45:06.084687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.209 [2024-10-08 18:45:06.084693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.209 [2024-10-08 18:45:06.084700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.209 [2024-10-08 18:45:06.084713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.209 qpair failed and we were unable to recover it.
00:29:12.209 [2024-10-08 18:45:06.094663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.209 [2024-10-08 18:45:06.094757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.209 [2024-10-08 18:45:06.094770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.209 [2024-10-08 18:45:06.094777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.209 [2024-10-08 18:45:06.094784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.209 [2024-10-08 18:45:06.094797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.209 qpair failed and we were unable to recover it.
00:29:12.209 [2024-10-08 18:45:06.104700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.209 [2024-10-08 18:45:06.104748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.209 [2024-10-08 18:45:06.104762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.209 [2024-10-08 18:45:06.104769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.209 [2024-10-08 18:45:06.104776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.209 [2024-10-08 18:45:06.104789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.209 qpair failed and we were unable to recover it.
00:29:12.209 [2024-10-08 18:45:06.114725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.209 [2024-10-08 18:45:06.114771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.209 [2024-10-08 18:45:06.114784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.209 [2024-10-08 18:45:06.114798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.209 [2024-10-08 18:45:06.114805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.209 [2024-10-08 18:45:06.114818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.209 qpair failed and we were unable to recover it.
00:29:12.209 [2024-10-08 18:45:06.124817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.209 [2024-10-08 18:45:06.124866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.124880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.124887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.124893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.124906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.134774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.134821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.134835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.134842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.134848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.134861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.144791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.144838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.144853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.144860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.144866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.144879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.154751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.154807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.154820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.154827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.154833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.154846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.164886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.164932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.164945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.164952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.164959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.164972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.174928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.175007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.175023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.175030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.175039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.175055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.184782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.184828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.184842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.184849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.184856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.184869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.194958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.195012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.195026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.195033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.195039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.195053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.204947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.205016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.205030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.205041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.205047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.205060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.215040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.215085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.215098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.215105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.215111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.215125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.225035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.225084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.225098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.225105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.225111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.225124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.235044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.235089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.235102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.235109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.235116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.235129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.245115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.245171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.245184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.245191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.245198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.245211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.210 [2024-10-08 18:45:06.255099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.210 [2024-10-08 18:45:06.255144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.210 [2024-10-08 18:45:06.255157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.210 [2024-10-08 18:45:06.255164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.210 [2024-10-08 18:45:06.255170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.210 [2024-10-08 18:45:06.255184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.210 qpair failed and we were unable to recover it.
00:29:12.472 [2024-10-08 18:45:06.265016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.472 [2024-10-08 18:45:06.265065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.472 [2024-10-08 18:45:06.265079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.472 [2024-10-08 18:45:06.265086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.472 [2024-10-08 18:45:06.265092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.472 [2024-10-08 18:45:06.265106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.472 qpair failed and we were unable to recover it.
00:29:12.472 [2024-10-08 18:45:06.275178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.472 [2024-10-08 18:45:06.275221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.472 [2024-10-08 18:45:06.275234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.472 [2024-10-08 18:45:06.275241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.472 [2024-10-08 18:45:06.275247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.472 [2024-10-08 18:45:06.275261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.472 qpair failed and we were unable to recover it.
00:29:12.472 [2024-10-08 18:45:06.285200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.285243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.285256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.285263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.285269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.285281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.295181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.295228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.295245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.295252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.295258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.295272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.305257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.305299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.305313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.305320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.305326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.305339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.315151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.315195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.315209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.315216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.315222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.315235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.325270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.325318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.325331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.325338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.325345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.325358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.335284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.335331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.335344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.335351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.335358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.335371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.345218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.345262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.345276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.345282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.345289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.345302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.355387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.355439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.355451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.355458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.355464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.355478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.365421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.365473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.365487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.365493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.365500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.365513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.375313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.375370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.375383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.375390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.375397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.375410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.385446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.385502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.385518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.385525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.385531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.385545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.395494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.395538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.395553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.395560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.395566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.395579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.405504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.405551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.405564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.405571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.405577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.405591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.415531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.415583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.473 [2024-10-08 18:45:06.415596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.473 [2024-10-08 18:45:06.415603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.473 [2024-10-08 18:45:06.415609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.473 [2024-10-08 18:45:06.415623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.473 qpair failed and we were unable to recover it.
00:29:12.473 [2024-10-08 18:45:06.425545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.473 [2024-10-08 18:45:06.425612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.474 [2024-10-08 18:45:06.425626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.474 [2024-10-08 18:45:06.425633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.474 [2024-10-08 18:45:06.425639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.474 [2024-10-08 18:45:06.425655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.474 qpair failed and we were unable to recover it.
00:29:12.474 [2024-10-08 18:45:06.435587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.474 [2024-10-08 18:45:06.435634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.474 [2024-10-08 18:45:06.435647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.474 [2024-10-08 18:45:06.435654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.474 [2024-10-08 18:45:06.435661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b1550
00:29:12.474 [2024-10-08 18:45:06.435673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.474 qpair failed and we were unable to recover it.
00:29:12.474 [2024-10-08 18:45:06.435798] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:29:12.474 A controller has encountered a failure and is being reset.
00:29:12.474 [2024-10-08 18:45:06.435902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19af0f0 (9): Bad file descriptor
00:29:12.474 Controller properly reset.
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Read completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 Write completed with error (sct=0, sc=8)
00:29:12.474 starting I/O failed
00:29:12.474 [2024-10-08 18:45:06.456515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.474 Initializing NVMe Controllers
00:29:12.474 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:12.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:12.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:12.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:12.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:12.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:12.474 Initialization complete. Launching workers.
00:29:12.474 Starting thread on core 1
00:29:12.474 Starting thread on core 2
00:29:12.474 Starting thread on core 3
00:29:12.474 Starting thread on core 0
00:29:12.474 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:12.474
00:29:12.474 real 0m10.771s
00:29:12.474 user 0m19.749s
00:29:12.474 sys 0m3.839s
00:29:12.474 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:12.474 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:12.474 ************************************
00:29:12.474 END TEST nvmf_target_disconnect_tc2
00:29:12.474 ************************************
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:12.735 rmmod nvme_tcp
00:29:12.735 rmmod nvme_fabrics
00:29:12.735 rmmod nvme_keyring
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1413313 ']'
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1413313
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1413313 ']'
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1413313
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413313
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413313'
00:29:12.735 killing process with pid 1413313
00:29:12.735 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1413313
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1413313
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:12.996 18:45:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:14.907 18:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:14.907
00:29:14.907 real 0m21.285s
00:29:14.907 user 0m47.066s
00:29:14.907 sys 0m10.154s
00:29:14.907 18:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:14.907 18:45:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:14.907 ************************************
00:29:14.907 END TEST nvmf_target_disconnect
00:29:14.907 ************************************
00:29:15.173 18:45:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:15.173
00:29:15.173 real 6m34.289s
00:29:15.173 user 11m16.595s
00:29:15.173 sys 2m17.566s
00:29:15.173 18:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:15.173 18:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.173 ************************************
00:29:15.173 END TEST nvmf_host
00:29:15.173 ************************************
00:29:15.173 18:45:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:29:15.173 18:45:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:29:15.173 18:45:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:15.173 18:45:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:29:15.173 18:45:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:15.173 18:45:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:15.173 ************************************
00:29:15.173 START TEST nvmf_target_core_interrupt_mode
00:29:15.173 ************************************
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:15.173 * Looking for test storage...
00:29:15.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:29:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.173 --rc genhtml_branch_coverage=1
00:29:15.173 --rc genhtml_function_coverage=1
00:29:15.173 --rc genhtml_legend=1
00:29:15.173 --rc geninfo_all_blocks=1
00:29:15.173 --rc geninfo_unexecuted_blocks=1
00:29:15.173
00:29:15.173 '
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:29:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.173 --rc genhtml_branch_coverage=1
00:29:15.173 --rc genhtml_function_coverage=1
00:29:15.173 --rc genhtml_legend=1
00:29:15.173 --rc geninfo_all_blocks=1
00:29:15.173 --rc geninfo_unexecuted_blocks=1
00:29:15.173
00:29:15.173 '
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:29:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.173 --rc genhtml_branch_coverage=1
00:29:15.173 --rc genhtml_function_coverage=1
00:29:15.173 --rc genhtml_legend=1
00:29:15.173 --rc geninfo_all_blocks=1
00:29:15.173 --rc geninfo_unexecuted_blocks=1
00:29:15.173
00:29:15.173 '
00:29:15.173 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:29:15.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.173 --rc genhtml_branch_coverage=1
00:29:15.173 --rc genhtml_function_coverage=1
00:29:15.173 --rc genhtml_legend=1
00:29:15.173 --rc geninfo_all_blocks=1
00:29:15.173 --rc geninfo_unexecuted_blocks=1
00:29:15.173
00:29:15.174 '
00:29:15.174 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:15.529 ************************************
00:29:15.529 START TEST nvmf_abort
00:29:15.529 ************************************
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:29:15.529 * Looking for test storage...
00:29:15.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:15.529 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:29:15.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.530 --rc genhtml_branch_coverage=1
00:29:15.530 --rc genhtml_function_coverage=1
00:29:15.530 --rc genhtml_legend=1
00:29:15.530 --rc geninfo_all_blocks=1
00:29:15.530 --rc geninfo_unexecuted_blocks=1
00:29:15.530
00:29:15.530 '
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:29:15.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.530 --rc genhtml_branch_coverage=1
00:29:15.530 --rc genhtml_function_coverage=1
00:29:15.530 --rc genhtml_legend=1
00:29:15.530 --rc geninfo_all_blocks=1
00:29:15.530 --rc geninfo_unexecuted_blocks=1
00:29:15.530
00:29:15.530 '
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:29:15.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.530 --rc genhtml_branch_coverage=1
00:29:15.530 --rc genhtml_function_coverage=1
00:29:15.530 --rc genhtml_legend=1
00:29:15.530 --rc geninfo_all_blocks=1
00:29:15.530 --rc geninfo_unexecuted_blocks=1
00:29:15.530
00:29:15.530 '
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:29:15.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.530 --rc genhtml_branch_coverage=1
00:29:15.530 --rc genhtml_function_coverage=1
00:29:15.530 --rc genhtml_legend=1
00:29:15.530 --rc geninfo_all_blocks=1
00:29:15.530 --rc geninfo_unexecuted_blocks=1
00:29:15.530
00:29:15.530 '
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.530 18:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.530 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.685 18:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:23.685 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
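The device scan traced here shows gather_supported_nvmf_pci_devs() classifying NICs purely by PCI vendor:device ID: nvmf/common.sh registers 0x8086:0x1592 and 0x8086:0x159b as E810 parts, so both ports of this dual-port 0x159b adapter land in the e810 array. A minimal standalone sketch of the same match, assuming lspci is available (the IDs are copied from the trace; the harness itself resolves them through its pci_bus_cache arrays rather than lspci):

  # List E810-class ports by numeric PCI IDs.
  # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor:device> [...]
  while read -r addr _ vd _; do
    case $vd in
      8086:1592|8086:159b) echo "Found $addr ($vd)" ;;   # mirrors the 'Found 0000:31:00.x' lines in the trace
    esac
  done < <(lspci -Dn)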
00:29:23.685 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:23.686 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:23.686 Found net devices under 0000:31:00.0: cvl_0_0 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:23.686 Found net devices under 0000:31:00.1: cvl_0_1 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.686 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:29:23.686 00:29:23.686 --- 10.0.0.2 ping statistics --- 00:29:23.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.686 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:29:23.686 00:29:23.686 --- 10.0.0.1 ping statistics --- 00:29:23.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.686 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1419433 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1419433 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1419433 ']' 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.686 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.686 [2024-10-08 18:45:17.285801] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:23.686 [2024-10-08 18:45:17.286989] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:29:23.686 [2024-10-08 18:45:17.287044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.686 [2024-10-08 18:45:17.379342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:23.686 [2024-10-08 18:45:17.473485] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.686 [2024-10-08 18:45:17.473542] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.686 [2024-10-08 18:45:17.473550] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.686 [2024-10-08 18:45:17.473557] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.686 [2024-10-08 18:45:17.473564] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.686 [2024-10-08 18:45:17.475098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.686 [2024-10-08 18:45:17.475404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.686 [2024-10-08 18:45:17.475406] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.686 [2024-10-08 18:45:17.568456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:23.686 [2024-10-08 18:45:17.569516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:23.687 [2024-10-08 18:45:17.569842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
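At this point the target is coming up: nvmf_tgt was launched inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xE, reactors started on cores 1-3, and each spdk_thread is being switched to interrupt mode (the last poll-group notice follows below). The plumbing that made this possible is the nvmf_tcp_init sequence traced just above; a condensed replay follows, with commands copied from the trace and the harness's waitforlisten replaced by a bare-bones socket poll:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic from the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # crude stand-in for the harness's waitforlisten

The two ping checks in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) confirm the veth-like split works in both directions before any NVMe traffic is attempted.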
00:29:23.687 [2024-10-08 18:45:17.570008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 [2024-10-08 18:45:18.148505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 Malloc0 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 Delay0 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 [2024-10-08 18:45:18.236393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.256 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:24.515 [2024-10-08 18:45:18.367732] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:27.054 Initializing NVMe Controllers 00:29:27.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:27.054 controller IO queue size 128 less than required 00:29:27.054 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:27.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:27.054 Initialization complete. Launching workers. 
00:29:27.054 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28684 00:29:27.054 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28741, failed to submit 66 00:29:27.054 success 28684, unsuccessful 57, failed 0 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.054 rmmod nvme_tcp 00:29:27.054 rmmod nvme_fabrics 00:29:27.054 rmmod nvme_keyring 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1419433 ']' 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1419433 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1419433 ']' 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1419433 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419433 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419433' 00:29:27.054 killing process with pid 1419433 
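That completes the abort run. abort.sh configured the target over RPC and then drove it with the abort example: Delay0 wraps Malloc0 with a configured 1,000,000 (microsecond) delay per I/O, which keeps the 128-deep queue full long enough for abort commands to catch their targets, hence 28,684 of 28,741 submitted aborts succeeding, 57 racing with normal completion, and 0 failing. A condensed replay of the sequence, every argument copied from the trace ($rpc here is shorthand for the harness's scripts/rpc.py):

  rpc=./scripts/rpc.py                                            # full path abbreviated for readability
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                      # 64 MiB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The "controller IO queue size 128 less than required" warning above comes from -q 128 saturating the controller's advertised queue, which is exactly the pressure the test wants; teardown (delete subsystem, kill the target, unload nvme-tcp modules) continues below.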
00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1419433 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1419433 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:27.054 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:27.055 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.055 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.055 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.055 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.055 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.593 00:29:29.593 real 0m13.728s 00:29:29.593 user 0m11.380s 00:29:29.593 sys 0m7.236s 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.593 ************************************ 00:29:29.593 END TEST nvmf_abort 00:29:29.593 ************************************ 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.593 ************************************ 00:29:29.593 START TEST nvmf_ns_hotplug_stress 00:29:29.593 ************************************ 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:29.593 * Looking for test storage... 
00:29:29.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.593 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:29.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.593 --rc genhtml_branch_coverage=1 00:29:29.593 --rc genhtml_function_coverage=1 00:29:29.593 --rc genhtml_legend=1 00:29:29.593 --rc geninfo_all_blocks=1 00:29:29.593 --rc geninfo_unexecuted_blocks=1 00:29:29.594 00:29:29.594 ' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.594 --rc genhtml_branch_coverage=1 00:29:29.594 --rc genhtml_function_coverage=1 00:29:29.594 --rc genhtml_legend=1 00:29:29.594 --rc geninfo_all_blocks=1 00:29:29.594 --rc geninfo_unexecuted_blocks=1 00:29:29.594 00:29:29.594 ' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.594 --rc genhtml_branch_coverage=1 00:29:29.594 --rc genhtml_function_coverage=1 00:29:29.594 --rc genhtml_legend=1 00:29:29.594 --rc geninfo_all_blocks=1 00:29:29.594 --rc geninfo_unexecuted_blocks=1 00:29:29.594 00:29:29.594 ' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:29.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.594 --rc genhtml_branch_coverage=1 00:29:29.594 --rc genhtml_function_coverage=1 
00:29:29.594 --rc genhtml_legend=1 00:29:29.594 --rc geninfo_all_blocks=1 00:29:29.594 --rc geninfo_unexecuted_blocks=1 00:29:29.594 00:29:29.594 ' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
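The lcov probe for this test repeats the pattern from nvmf_abort above: lt 1.15 2 splits both version strings on '.', '-', and ':' and compares the fields numerically, and since lcov 1.15 sorts below 2 the branch/function coverage options get enabled. A bare-bones reconstruction of that comparison, simplified from the cmp_versions xtrace to numeric fields only:

  lt() {   # "less than" over dotted version strings, as traced above
    local -a v1 v2; local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "old lcov: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"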
00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.594 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.724 18:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.724 18:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:37.724 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:37.724 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:37.724 
18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:37.724 Found net devices under 0000:31:00.0: cvl_0_0 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:37.724 Found net devices under 0000:31:00.1: cvl_0_1 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.724 18:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.724 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:29:37.725 00:29:37.725 --- 10.0.0.2 ping statistics --- 00:29:37.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.725 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:29:37.725 00:29:37.725 --- 10.0.0.1 ping statistics --- 00:29:37.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.725 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.725 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1424421 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1424421 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1424421 ']' 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
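Condensed, the nvmftestinit trace above (E810 port discovery through the ping checks) wires one port into a private network namespace so the target and the initiator run separate network stacks on one machine. The same wiring pulled out of the xtrace, with interface names and addresses exactly as logged:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root namespace -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root namespace

After that, nvmfappstart launches the target inside the namespace (the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE" line above) and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers.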
00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:37.725 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:29:37.725 [2024-10-08 18:45:31.062022] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:37.725 [2024-10-08 18:45:31.063167] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:29:37.725 [2024-10-08 18:45:31.063219] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:37.725 [2024-10-08 18:45:31.156619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:37.725 [2024-10-08 18:45:31.249197] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:37.725 [2024-10-08 18:45:31.249262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:37.725 [2024-10-08 18:45:31.249272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:37.725 [2024-10-08 18:45:31.249279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:37.725 [2024-10-08 18:45:31.249285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:37.725 [2024-10-08 18:45:31.250656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:29:37.725 [2024-10-08 18:45:31.250814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:29:37.725 [2024-10-08 18:45:31.250814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:29:37.725 [2024-10-08 18:45:31.338857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:37.725 [2024-10-08 18:45:31.338904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:29:37.725 [2024-10-08 18:45:31.339615] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:29:37.725 [2024-10-08 18:45:31.339808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
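The three reactors land on cores 1-3 because of the -m 0xE coremask (binary 1110); a quick sketch to decode any mask:

    mask=0xE
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # prints cores 1, 2 and 3, matching "Total cores available: 3" above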
00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:37.985 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:38.245 [2024-10-08 18:45:32.107725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.245 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:38.505 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.505 [2024-10-08 18:45:32.552447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.764 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.764 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:39.023 Malloc0 00:29:39.023 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:39.283 Delay0 00:29:39.283 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.547 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:39.547 NULL1 00:29:39.547 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
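Pulled out of the trace, the ns_hotplug_stress.sh lines tagged @27-@36 above provision the whole target over RPC before any I/O starts. Arguments are exactly as logged; the comments are a gloss:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0      # 32 MiB backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # namespace 1: slow on purpose
    $rpc bdev_null_create NULL1 1000 512           # null bdev, resized by the loop below
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1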
00:29:39.808 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1424879 00:29:39.808 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:39.808 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:39.808 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.187 Read completed with error (sct=0, sc=11) 00:29:41.187 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.187 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:41.187 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:41.447 true 00:29:41.447 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:41.447 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.384 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.384 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:42.384 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:42.644 true 00:29:42.644 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:42.644 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.903 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.903 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:42.903 18:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:43.163 true 00:29:43.163 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:43.163 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.547 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:44.547 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:44.547 true 00:29:44.547 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:44.547 18:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.487 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.747 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:45.747 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:45.747 true 00:29:45.747 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:45.747 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:46.008 18:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.268 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:46.268 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:46.268 true 00:29:46.268 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:46.268 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.646 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.647 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:47.647 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:47.906 true 00:29:47.906 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:47.906 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.847 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.847 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:48.847 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:49.107 true 00:29:49.107 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:49.107 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.366 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.625 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:49.625 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:49.625 true 00:29:49.625 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:49.625 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.885 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.144 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:50.144 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:50.144 true 00:29:50.144 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:50.144 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.404 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.666 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:50.666 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:50.666 true 00:29:50.666 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:50.666 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.926 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
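From here the trace is one cycle repeated for the rest of the 30-second perf run. Reconstructed from the @44-@50 lines, the stress loop is effectively the following (a sketch; the real script also sets traps and checks perf's exit status at the end):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do      # PERF_PID=1424879, started with -t 30 -Q 1000
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # yank namespace 1 under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # plug it back in
        (( null_size++ ))                                            # 1001, 1002, ... as logged
        $rpc bdev_null_resize NULL1 "$null_size"                     # grow the other namespace
    done

The "Read completed with error (sct=0, sc=11)" floods, rate-limited by perf's -Q 1000 ("Message suppressed 999 times"), are the initiator seeing namespace 1 vanish mid-read, which is exactly what this test exercises.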
00:29:50.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.187 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:51.187 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:51.187 true 00:29:51.187 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:51.188 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.127 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.386 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:52.386 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:52.386 true 00:29:52.386 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:52.386 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.645 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.903 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:52.903 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:52.903 true 00:29:52.903 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:52.903 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.163 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.424 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:53.424 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:53.424 true 00:29:53.424 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:53.424 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.683 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.943 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:53.943 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:53.943 true 00:29:54.202 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:54.202 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.202 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.464 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:54.464 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:54.724 true 00:29:54.724 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:54.724 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.724 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.983 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:54.983 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:55.241 true 00:29:55.241 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:55.241 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.241 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.500 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:55.500 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:55.759 true 00:29:55.759 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:55.759 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.700 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.700 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:56.700 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:56.959 true 00:29:56.959 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:56.959 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.959 18:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.219 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:57.219 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:57.478 true 00:29:57.478 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:57.478 18:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.478 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.739 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:57.739 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:57.999 true 00:29:57.999 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:57.999 18:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:58.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:58.938 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.938 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:58.938 18:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:59.198 true 00:29:59.198 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:59.198 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.458 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.458 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:59.458 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:59.718 true 00:29:59.718 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:29:59.718 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:01.097 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:01.097 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:01.097 true 00:30:01.357 18:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:01.357 18:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.296 18:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.296 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:02.296 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:02.296 true 00:30:02.555 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:02.555 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.555 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.815 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:30:02.815 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:02.815 true 00:30:03.078 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:03.079 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.079 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.339 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:03.339 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:03.339 true 00:30:03.597 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:03.597 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.597 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.857 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:03.857 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:04.116 true 00:30:04.116 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:04.116 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.055 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.314 18:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:05.314 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:05.573 true 00:30:05.573 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:05.573 18:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.512 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.512 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:06.512 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:06.772 true 00:30:06.772 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:06.772 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.033 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.033 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:07.033 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:07.294 true 00:30:07.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:07.294 18:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.673 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.673 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:30:08.673 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:08.673 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:08.673 true 00:30:08.673 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:08.673 18:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.671 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.671 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:09.671 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:09.932 true 00:30:09.932 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:09.932 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.191 Initializing NVMe Controllers 00:30:10.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.191 Controller IO queue size 128, less than required. 00:30:10.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.191 Controller IO queue size 128, less than required. 00:30:10.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.191 Initialization complete. Launching workers. 
00:30:10.191 ========================================================
00:30:10.191                                                                              Latency(us)
00:30:10.191 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:10.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2294.08       1.12   32287.71    1461.17 1020319.27
00:30:10.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16664.68       8.14    7680.58    1124.41  400153.80
00:30:10.191 ========================================================
00:30:10.191 Total                                                                    :   18958.77       9.26   10658.14    1124.41 1020319.27
00:30:10.191
00:30:10.191 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.450 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:10.450 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:10.450 true 00:30:10.450 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1424879 00:30:10.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1424879) - No such process 00:30:10.450 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1424879 00:30:10.450 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.709 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:10.967 null0 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.967 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:11.225 null1 00:30:11.225 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.225
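Up to the perf summary above, the trace (the sh@44 through sh@50 markers, repeated every pass) is the single-namespace hotplug loop: remove namespace 1, re-attach the Delay0 bdev, and resize the NULL1 null bdev (null_size 1022, 1023, ... 1035 in this run). The rate-limited "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the reads that land while the namespace is detached; sct=0/sc=11 is consistent with NVMe generic status 0x0b (Invalid Namespace or Format), which is exactly the condition this test provokes. In the summary table, the Total row is the IOPS-weighted combination of the two namespaces: (2294.08 x 32287.71 + 16664.68 x 7680.58) / 18958.77 ~= 10658 us average latency. Reconstructed from the xtrace markers, the loop looks roughly like this (a sketch, not the verbatim SPDK script; $rpc_py and $perf_pid are stand-ins for the script's own variables -- perf ran here as PID 1424879):

    # Sketch reconstructed from the sh@44-sh@50 xtrace markers above.
    while kill -0 "$perf_pid"; do                                        # sh@44: keep going while perf is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                     # sh@49: 1022, 1023, ... in this run
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # sh@50: resize NULL1 under I/O (prints "true")
    done                                                                 # loop exits once kill -0 fails ("No such process")
    wait "$perf_pid"                                                     # sh@53: reap the exited perf process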
18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.225 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:11.484 null2 00:30:11.484 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.484 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.484 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:11.484 null3 00:30:11.484 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.484 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.484 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:11.744 null4 00:30:11.744 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.744 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.744 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:11.744 null5 00:30:11.744 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.744 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.744 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:12.005 null6 00:30:12.005 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:12.005 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:12.005 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:12.266 null7 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 
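With perf gone, the script removes namespaces 1 and 2 (sh@54, sh@55) and sets up the concurrent phase: the bdev_null_create calls traced above create eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, one per worker. Per the sh@58-sh@60 markers, roughly (same stand-in conventions as the sketch above):

    # Sketch of the setup traced at sh@58-sh@60: one null bdev per worker.
    nthreads=8                                       # sh@58
    pids=()                                          # sh@58: collects worker PIDs for the later wait
    for ((i = 0; i < nthreads; ++i)); do             # sh@59: matches the (( i = 0 )) / (( i < nthreads )) / (( ++i )) trace
        $rpc_py bdev_null_create "null$i" 100 4096   # sh@60: prints the new bdev's name ("null0", "null1", ...)
    done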
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
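From here on the interleaved trace is eight add_remove workers running in parallel: they are launched at sh@62-sh@64 and reaped by the sh@66 wait a little further down (PIDs 1431224 through 1431238 in this run). Each worker pairs one namespace ID with one null bdev and hot-adds/hot-removes it ten times, so the add_ns and remove_ns calls for nsids 1-8 interleave freely in the log. Reconstructed from the sh@14-sh@18 markers (again a sketch, with $rpc_py as a stand-in):

    # Sketch of the worker traced at sh@14-sh@18 plus the fan-out at sh@62-sh@66.
    add_remove() {
        local nsid=$1 bdev=$2                        # sh@14: e.g. "add_remove 1 null0"
        for ((i = 0; i < 10; ++i)); do               # sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

    for ((i = 0; i < nthreads; ++i)); do             # sh@62
        add_remove $((i + 1)) "null$i" &             # sh@63: nsid 1..8 paired with null0..null7
        pids+=($!)                                   # sh@64
    done
    wait "${pids[@]}"                                # sh@66: the eight PIDs listed in the trace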
00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1431224 1431226 1431228 1431231 1431233 1431235 1431237 1431238 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:12.266 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.527 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.528 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.821 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:13.118 18:46:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:13.118 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:13.118 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:13.398 18:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:13.398 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:13.657 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:13.657 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:13.657 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:13.657 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:13.657 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.658 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.917 18:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.917 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:14.176 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:14.177 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:14.439 
18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.439 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.703 18:46:08 
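
Each rpc.py call here is a thin JSON-RPC 2.0 client talking to the running target over a Unix socket. As a rough illustration only (the default /var/tmp/spdk.sock path and the request id are assumptions; the log never shows them), a single attach from this loop goes over the wire as:

# Hypothetical wire view of one nvmf_subsystem_add_ns call.
printf '%s' '{"jsonrpc": "2.0", "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
             "namespace": {"nsid": 7, "bdev_name": "null6"}}}' |
    nc -U /var/tmp/spdk.sock
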
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.703 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.964 18:46:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:14.964 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:15.224 18:46:09 
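
The null0-null7 bdevs being cycled here must already exist before the loop starts; null bdevs discard writes and return zeroes, which makes them cheap hotplug fodder. A hedged sketch of that setup step (the 100 MB size and 4096-byte block size are illustrative, not read from this log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for n in {0..7}; do
    # bdev_null_create <name> <total_size_MB> <block_size>
    "$rpc" bdev_null_create "null$n" 100 4096
done
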
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:15.224 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.484 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.744 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.005 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.005 rmmod nvme_tcp 00:30:16.005 rmmod nvme_fabrics 00:30:16.005 rmmod nvme_keyring 00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.266 18:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1424421 ']'
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1424421
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1424421 ']'
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1424421
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1424421
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1424421'
00:30:16.266 killing process with pid 1424421
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1424421
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1424421
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:16.266 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:18.810 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:18.810
00:30:18.810 real 0m49.241s
00:30:18.810 user 2m57.471s
00:30:18.810 sys 0m20.535s
00:30:18.810 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:18.810 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:18.811 ************************************
00:30:18.811 END TEST nvmf_ns_hotplug_stress
************************************
00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:18.811 ************************************
00:30:18.811 START TEST nvmf_delete_subsystem
************************************
00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:18.811 * Looking for test storage...
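
Restated in plain commands, the nvmftestfini teardown traced above (just before the START banner) amounts to the following; the PID and interface names are this run's, and the netns deletion stands in for the _remove_spdk_ns helper whose body the log suppresses:

pid=1424421                               # nvmf_tgt reactor from this log
kill "$pid" && wait "$pid" 2>/dev/null    # killprocess: kill, then reap
sync
for mod in nvme-tcp nvme-fabrics; do
    modprobe -v -r "$mod" || true         # retried up to 20x in common.sh
done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear initiator side
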
00:30:18.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:18.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.811 --rc genhtml_branch_coverage=1 00:30:18.811 --rc genhtml_function_coverage=1 00:30:18.811 --rc genhtml_legend=1 00:30:18.811 --rc geninfo_all_blocks=1 00:30:18.811 --rc geninfo_unexecuted_blocks=1 00:30:18.811 00:30:18.811 ' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:18.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.811 --rc genhtml_branch_coverage=1 00:30:18.811 --rc genhtml_function_coverage=1 00:30:18.811 --rc genhtml_legend=1 00:30:18.811 --rc geninfo_all_blocks=1 00:30:18.811 --rc geninfo_unexecuted_blocks=1 00:30:18.811 00:30:18.811 ' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:18.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.811 --rc genhtml_branch_coverage=1 00:30:18.811 --rc genhtml_function_coverage=1 00:30:18.811 --rc genhtml_legend=1 00:30:18.811 --rc geninfo_all_blocks=1 00:30:18.811 --rc geninfo_unexecuted_blocks=1 00:30:18.811 00:30:18.811 ' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:18.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.811 --rc genhtml_branch_coverage=1 00:30:18.811 --rc genhtml_function_coverage=1 00:30:18.811 --rc 
genhtml_legend=1 00:30:18.811 --rc geninfo_all_blocks=1 00:30:18.811 --rc geninfo_unexecuted_blocks=1 00:30:18.811 00:30:18.811 ' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.811 18:46:12 
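
The burst of scripts/common.sh entries above is only a dotted-version comparison: lt 1.15 2 asks whether the installed lcov predates 2.x by splitting both strings on '.'/'-' and comparing field by field, and the older-style --rc lcov_* option names are chosen because it does. The same logic as a standalone sketch (a reimplementation for illustration, not the library function itself):

lt() {
    # Return 0 when version $1 sorts strictly before version $2.
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; ++i)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy lcov_* coverage flags"
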
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.811 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.812 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.953 18:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.953 18:46:19 
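
A detail worth pulling out of the build_nvmf_app_args trace a little further back: because this suite runs with --interrupt-mode, that flag is appended to the target's argv next to the shared-memory id and the 0xFFFF trace-flag mask. Assembled by hand it looks roughly like this (the nvmf_tgt binary path is an assumption; the log never prints it):

# Sketch of the argv the trace assembles; binary path is assumed.
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)   # shm id + trace mask
NVMF_APP+=(--interrupt-mode)                       # the mode under test
echo "target launch line: ${NVMF_APP[*]}"
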
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:26.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:26.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.953 18:46:19 
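
What this stretch of nvmf/common.sh is doing: both E810 ports matched the supported-device table (0x8086:0x159b), so each PCI function is now resolved to its kernel net interface through sysfs. The same lookup as a standalone loop, using the two addresses this run found:

for pci in 0000:31:00.0 0000:31:00.1; do
    # Every netdev bound to this PCI function shows up as a directory here.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for dev in "${pci_net_devs[@]}"; do
        [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
    done
done
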
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:26.953 Found net devices under 0000:31:00.0: cvl_0_0 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:26.953 Found net devices under 0000:31:00.1: cvl_0_1 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.953 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.954 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.954 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.954 18:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:30:26.954 00:30:26.954 --- 10.0.0.2 ping statistics --- 00:30:26.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.954 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:30:26.954 00:30:26.954 --- 10.0.0.1 ping statistics --- 00:30:26.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.954 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1436324 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1436324 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1436324 ']' 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
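For reference, the nvmf_tcp_init sequence traced above reduces to the following sketch (same interface names and addresses as this run; the target-side port is isolated in a network namespace so initiator and target traffic cross the real wire rather than the local stack):

  # Move the target-side port into its own namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1 in the default namespace; the target gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1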
00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.954 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:26.954 [2024-10-08 18:46:20.383888] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:26.954 [2024-10-08 18:46:20.385053] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:30:26.954 [2024-10-08 18:46:20.385105] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.954 [2024-10-08 18:46:20.476369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:26.954 [2024-10-08 18:46:20.571011] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.954 [2024-10-08 18:46:20.571074] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.954 [2024-10-08 18:46:20.571083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.954 [2024-10-08 18:46:20.571090] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.954 [2024-10-08 18:46:20.571096] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.954 [2024-10-08 18:46:20.572148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.954 [2024-10-08 18:46:20.572312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.954 [2024-10-08 18:46:20.649458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:26.954 [2024-10-08 18:46:20.650222] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:26.954 [2024-10-08 18:46:20.650456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
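The interrupt-mode notices above come from launching the target inside that namespace with event-driven reactors instead of busy polling. A minimal reproduction of the launch, with the flags from this run (the framework_get_reactors RPC is an optional, assumed-available inspection step via the stock scripts/rpc.py, not part of this trace):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  # -m 0x3 selects two cores, hence the two reactors on cores 0 and 1 above;
  # --interrupt-mode flips the app thread and nvmf poll groups to intr mode.
  ./scripts/rpc.py framework_get_reactors   # inspect reactors (assumes recent SPDK)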
00:30:27.215 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.215 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:27.215 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:27.215 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.216 [2024-10-08 18:46:21.249053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.216 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.477 [2024-10-08 18:46:21.289579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.477 NULL1 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.477 18:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.477 Delay0 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1436492 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:27.477 18:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:27.477 [2024-10-08 18:46:21.402636] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
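Spelled out, the rpc_cmd calls above (rpc_cmd is the harness wrapper around scripts/rpc.py) build the full target stack that the delete will tear down, with Delay0 wrapping NULL1 so roughly one second of artificial latency per op keeps IO in flight when the subsystem disappears:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB backing, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0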
00:30:29.392 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.392 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.392 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 [2024-10-08 18:46:23.485085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd1fd0 is same with the state(6) to be set 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 
00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error 
(sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Write completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 starting I/O failed: -6 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 Read completed with error (sct=0, sc=8) 00:30:29.654 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 
00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Read completed with error (sct=0, sc=8) 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 starting I/O failed: -6 00:30:29.655 Write completed with error (sct=0, sc=8) 00:30:29.655 [2024-10-08 18:46:23.490297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8314000c10 is same with the state(6) to be set 00:30:30.593 [2024-10-08 18:46:24.461555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd36b0 is same with the state(6) to be set 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 [2024-10-08 18:46:24.488411] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd21b0 is same with the state(6) to be set 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 [2024-10-08 18:46:24.488953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd26c0 is same with the state(6) to be set 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Write completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.593 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 [2024-10-08 18:46:24.492113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f831400d650 is same with the state(6) to be set 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 
00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Write completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 Read completed with error (sct=0, sc=8) 00:30:30.594 [2024-10-08 18:46:24.492676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f831400cff0 is same with the state(6) to be set 00:30:30.594 Initializing NVMe Controllers 00:30:30.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.594 Controller IO queue size 128, less than required. 00:30:30.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:30.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:30.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:30.594 Initialization complete. Launching workers. 
00:30:30.594 ======================================================== 00:30:30.594 Latency(us) 00:30:30.594 Device Information : IOPS MiB/s Average min max 00:30:30.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.22 0.08 906873.47 388.82 1007018.31 00:30:30.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.16 0.09 918041.30 444.54 1012419.14 00:30:30.594 ======================================================== 00:30:30.594 Total : 342.38 0.17 912684.64 388.82 1012419.14 00:30:30.594 00:30:30.594 [2024-10-08 18:46:24.493115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd36b0 (9): Bad file descriptor 00:30:30.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:30.594 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.594 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:30.594 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1436492 00:30:30.594 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1436492 00:30:31.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1436492) - No such process 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1436492 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1436492 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:31.162 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1436492 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 [2024-10-08 18:46:25.025680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1437159 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:31.162 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:31.162 [2024-10-08 18:46:25.112400] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
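The sleep 0.5 iterations that follow implement a bounded wait for the perf process to exit on its own. Approximately, as reconstructed from the delete_subsystem.sh line numbers visible in the trace (perf_pid is 1437159 in this run; the exact failure handling in the real script may differ):

  delay=0
  while kill -0 "$perf_pid"; do      # last probe prints the 'No such process' seen below
      (( delay++ > 20 )) && exit 1   # ~10 s budget before calling the test hung
      sleep 0.5
  done
  wait "$perf_pid"                   # reap the exit status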
00:30:31.733 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:31.733 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:31.733 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:32.303 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:32.303 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:32.303 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:32.563 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:32.563 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:32.563 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:33.130 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:33.130 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:33.130 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:33.699 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:33.699 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:33.699 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:34.270 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:34.270 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:34.270 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:34.530 Initializing NVMe Controllers 00:30:34.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.530 Controller IO queue size 128, less than required. 00:30:34.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:34.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:34.530 Initialization complete. Launching workers. 
00:30:34.530 ======================================================== 00:30:34.530 Latency(us) 00:30:34.530 Device Information : IOPS MiB/s Average min max 00:30:34.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002184.27 1000287.67 1007175.49 00:30:34.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003837.92 1000399.44 1010118.28 00:30:34.530 ======================================================== 00:30:34.530 Total : 256.00 0.12 1003011.10 1000287.67 1010118.28 00:30:34.530 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1437159 00:30:34.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1437159) - No such process 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1437159 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:34.530 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:34.790 rmmod nvme_tcp 00:30:34.790 rmmod nvme_fabrics 00:30:34.790 rmmod nvme_keyring 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1436324 ']' 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1436324 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1436324 ']' 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1436324 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1436324 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1436324' 00:30:34.790 killing process with pid 1436324 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1436324 00:30:34.790 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1436324 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.050 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.959 00:30:36.959 real 0m18.490s 00:30:36.959 user 0m26.796s 00:30:36.959 sys 0m7.361s 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:36.959 ************************************ 00:30:36.959 END TEST nvmf_delete_subsystem 00:30:36.959 ************************************ 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
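Cleanup then unwinds the prologue; condensed from the trace above (the iptr helper restores an iptables ruleset with the SPDK_NVMF-tagged rules filtered out; the namespace deletion line is an assumption, since _remove_spdk_ns runs with its output discarded):

  modprobe -v -r nvme-tcp                  # also unloads nvme_fabrics / nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1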
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:36.959 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:37.221 ************************************ 00:30:37.221 START TEST nvmf_host_management 00:30:37.221 ************************************ 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:37.221 * Looking for test storage... 00:30:37.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.221 --rc genhtml_branch_coverage=1 00:30:37.221 --rc genhtml_function_coverage=1 00:30:37.221 --rc genhtml_legend=1 00:30:37.221 --rc geninfo_all_blocks=1 00:30:37.221 --rc geninfo_unexecuted_blocks=1 00:30:37.221 00:30:37.221 ' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.221 --rc genhtml_branch_coverage=1 00:30:37.221 --rc genhtml_function_coverage=1 00:30:37.221 --rc genhtml_legend=1 00:30:37.221 --rc geninfo_all_blocks=1 00:30:37.221 --rc geninfo_unexecuted_blocks=1 00:30:37.221 00:30:37.221 ' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.221 --rc genhtml_branch_coverage=1 00:30:37.221 --rc genhtml_function_coverage=1 00:30:37.221 --rc genhtml_legend=1 00:30:37.221 --rc geninfo_all_blocks=1 00:30:37.221 --rc geninfo_unexecuted_blocks=1 00:30:37.221 00:30:37.221 ' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.221 --rc genhtml_branch_coverage=1 00:30:37.221 --rc genhtml_function_coverage=1 00:30:37.221 --rc genhtml_legend=1 
00:30:37.221 --rc geninfo_all_blocks=1 00:30:37.221 --rc geninfo_unexecuted_blocks=1 00:30:37.221 00:30:37.221 ' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.221 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.222 18:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.222 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.356 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.356 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:45.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:45.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
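The records above show nvmf/common.sh matching each PCI function against per-family device-ID allowlists (e810, x722, mlx) kept in a pci_bus_cache map keyed by vendor:device, then looking up the netdevs bound to each matched function under sysfs. A minimal standalone sketch of that lookup pattern, assuming lspci -Dnmm output for building the cache and listing only the two e810 IDs seen in this run (the cache-building loop is illustrative, not the script's actual code):

# Build a vendor:device -> PCI-address map (assumption: lspci -Dnmm field order).
declare -A pci_bus_cache=()
while read -r addr _ vendor device _; do
  vendor=${vendor//\"/}; device=${device//\"/}
  pci_bus_cache["0x$vendor:0x$device"]+="$addr "
done < <(lspci -Dnmm)

intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
for pci in "${e810[@]}"; do
  echo "Found $pci"                                 # analogous to the trace's echo
  for net in "/sys/bus/pci/devices/$pci/net/"*; do  # netdevs bound to this function
    [[ -e $net ]] && echo "  net device under $pci: ${net##*/}"
  done
done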
00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:45.357 Found net devices under 0000:31:00.0: cvl_0_0 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:45.357 Found net devices under 0000:31:00.1: cvl_0_1 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:30:45.357 00:30:45.357 --- 10.0.0.2 ping statistics --- 00:30:45.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.357 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:45.357 00:30:45.357 --- 10.0.0.1 ping statistics --- 00:30:45.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.357 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.357 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1442221 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1442221 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1442221 ']' 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:45.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.358 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.358 [2024-10-08 18:46:39.049959] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.358 [2024-10-08 18:46:39.051127] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:30:45.358 [2024-10-08 18:46:39.051178] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.358 [2024-10-08 18:46:39.141095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.358 [2024-10-08 18:46:39.234657] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.358 [2024-10-08 18:46:39.234719] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.358 [2024-10-08 18:46:39.234728] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.358 [2024-10-08 18:46:39.234734] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.358 [2024-10-08 18:46:39.234741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.358 [2024-10-08 18:46:39.236805] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.358 [2024-10-08 18:46:39.236968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.358 [2024-10-08 18:46:39.237098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:45.358 [2024-10-08 18:46:39.237278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.358 [2024-10-08 18:46:39.321287] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.358 [2024-10-08 18:46:39.322168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:45.358 [2024-10-08 18:46:39.322452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:45.358 [2024-10-08 18:46:39.322895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:45.358 [2024-10-08 18:46:39.322938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
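The notices above come from nvmfappstart: the target was launched inside the server-side namespace with ip netns exec and the harness polls its RPC socket before issuing any configuration calls. A condensed sketch of that launch-and-wait step, using the binary path and flags shown in the trace (the polling loop below is a simplification of the suite's waitforlisten helper, not its exact code):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Launch nvmf_tgt in the target namespace: shm id 0, tracepoint mask 0xFFFF,
# interrupt mode, reactor mask 0x1E (cores 1-4), as in the trace above.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!

# Poll until the app answers on its UNIX-domain RPC socket (simplified waitforlisten).
for ((i = 0; i < 100; i++)); do
  "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" framework_wait_init >/dev/null 2>&1 && break
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.1
done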
00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 [2024-10-08 18:46:39.906497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.928 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 Malloc0 00:30:46.190 [2024-10-08 18:46:39.994788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1442350 00:30:46.190 18:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1442350 /var/tmp/bdevperf.sock 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1442350 ']' 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:46.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:46.190 { 00:30:46.190 "params": { 00:30:46.190 "name": "Nvme$subsystem", 00:30:46.190 "trtype": "$TEST_TRANSPORT", 00:30:46.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.190 "adrfam": "ipv4", 00:30:46.190 "trsvcid": "$NVMF_PORT", 00:30:46.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.190 "hdgst": ${hdgst:-false}, 00:30:46.190 "ddgst": ${ddgst:-false} 00:30:46.190 }, 00:30:46.190 "method": "bdev_nvme_attach_controller" 00:30:46.190 } 00:30:46.190 EOF 00:30:46.190 )") 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
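The records above trace gen_nvmf_target_json assembling one JSON fragment per subsystem from a heredoc, joining the fragments with IFS=',', and validating the result through jq; the rendered config that bdevperf receives on /dev/fd/63 is printed just below. A minimal sketch of that heredoc-plus-jq pattern, reduced to a single hard-coded subsystem with the values from this run (the function name and its reduced body are illustrative, not the suite's actual helper):

gen_target_json() {   # simplified stand-in for the suite's gen_nvmf_target_json
  local subsystem=${1:-0}
  local config=()
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  local IFS=,
  printf '%s\n' "${config[*]}" | jq .   # jq pretty-prints and validates the JSON
}

gen_target_json 0   # emits the bdev_nvme_attach_controller config shown below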
00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:46.190 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:46.190 "params": { 00:30:46.190 "name": "Nvme0", 00:30:46.190 "trtype": "tcp", 00:30:46.190 "traddr": "10.0.0.2", 00:30:46.190 "adrfam": "ipv4", 00:30:46.190 "trsvcid": "4420", 00:30:46.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.190 "hdgst": false, 00:30:46.190 "ddgst": false 00:30:46.190 }, 00:30:46.190 "method": "bdev_nvme_attach_controller" 00:30:46.190 }' 00:30:46.190 [2024-10-08 18:46:40.106291] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:30:46.190 [2024-10-08 18:46:40.106363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442350 ] 00:30:46.190 [2024-10-08 18:46:40.169762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.451 [2024-10-08 18:46:40.257792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.451 Running I/O for 10 seconds... 00:30:46.451 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:46.451 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:46.451 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:46.451 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.451 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:46.714 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.977 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.977 [2024-10-08 18:46:40.878175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.977 [2024-10-08 18:46:40.878239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.977 [2024-10-08 18:46:40.878248] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is
same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.878681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c960 is same with the state(6) to be set 00:30:46.978 [2024-10-08 18:46:40.879110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.978 [2024-10-08 18:46:40.879372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.978 [2024-10-08 18:46:40.879380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.979 [2024-10-08 18:46:40.879882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.979 [2024-10-08 18:46:40.879887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.980 [2024-10-08 18:46:40.879899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.980 [2024-10-08 18:46:40.879911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.980 [2024-10-08 18:46:40.879923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.980 [2024-10-08 18:46:40.879937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.980 [2024-10-08 18:46:40.879949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.980 [2024-10-08 18:46:40.879961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.879968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa55f60 is same with the state(6) to be set 00:30:46.980 [2024-10-08 18:46:40.880042] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa55f60 was disconnected and freed. reset controller. 00:30:46.980 [2024-10-08 18:46:40.880924] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.980 task offset: 81920 on job bdev=Nvme0n1 fails 00:30:46.980 00:30:46.980 Latency(us) 00:30:46.980 [2024-10-08T16:46:41.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.980 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:46.980 Job: Nvme0n1 ended in about 0.42 seconds with error 00:30:46.980 Verification LBA range: start 0x0 length 0x400 00:30:46.980 Nvme0n1 : 0.42 1517.68 94.86 151.77 0.00 37490.04 9666.56 32986.45 00:30:46.980 [2024-10-08T16:46:41.037Z] =================================================================================================================== 00:30:46.980 [2024-10-08T16:46:41.037Z] Total : 1517.68 94.86 151.77 0.00 37490.04 9666.56 32986.45 00:30:46.980 [2024-10-08 18:46:40.882603] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:46.980 [2024-10-08 18:46:40.882642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d100 (9): Bad file descriptor 00:30:46.980 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.980 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:46.980 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.980 [2024-10-08 18:46:40.884177] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:46.980 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.980 [2024-10-08 18:46:40.884280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:46.980 [2024-10-08 18:46:40.884317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.980 [2024-10-08 18:46:40.884330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:46.980 [2024-10-08 18:46:40.884338] nvme_fabric.c: 
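The access-denied error above is the point of this test step: the host issues its FABRIC CONNECT before its NQN is on the subsystem's allow list, the target rejects it with "does not allow host", and the test then grants access through the rpc_cmd wrapper around nvmf_subsystem_add_host. A minimal standalone sketch of the same grant, assuming a running SPDK target and the stock scripts/rpc.py from the SPDK tree (RPC socket at its default /var/tmp/spdk.sock):

  # Allow host0 to connect to cnode0 (mirrors the rpc_cmd call in the trace above)
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

  # Or skip per-host grants entirely when the subsystem is created
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host

With the host added, a fresh CONNECT is expected to succeed, which is what the second bdevperf run below goes on to verify.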
00:30:46.980 [2024-10-08 18:46:40.884338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:30:46.980 [2024-10-08 18:46:40.884344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.980 [2024-10-08 18:46:40.884349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x83d100
00:30:46.980 [2024-10-08 18:46:40.884375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d100 (9): Bad file descriptor
00:30:46.980 [2024-10-08 18:46:40.884386] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:30:46.980 [2024-10-08 18:46:40.884393] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:30:46.980 [2024-10-08 18:46:40.884400] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:30:46.980 [2024-10-08 18:46:40.884413] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:46.980 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:46.980 18:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1442350
00:30:47.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1442350) - No such process
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:30:47.919 {
00:30:47.919 "params": {
00:30:47.919 "name": "Nvme$subsystem",
00:30:47.919 "trtype": "$TEST_TRANSPORT",
00:30:47.919 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:47.919 "adrfam": "ipv4",
00:30:47.919 "trsvcid": "$NVMF_PORT",
00:30:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:47.919 "hdgst": ${hdgst:-false},
00:30:47.919 "ddgst": ${ddgst:-false}
00:30:47.919 },
00:30:47.919 "method": "bdev_nvme_attach_controller"
00:30:47.919 }
00:30:47.919 EOF
00:30:47.919 )")
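The bdevperf invocation above reads its whole bdev configuration from /dev/fd/62, i.e. from a bash process substitution rather than a file on disk, with gen_nvmf_target_json (the test helper traced here) emitting the JSON on that descriptor. A condensed sketch of the same pattern, assuming the helper is sourced and bdevperf was built in the usual tree location:

  # <(...) expands to /dev/fd/N and bdevperf reads the generated config from it,
  # so no temporary bdevperf.conf ever has to be written or cleaned up
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

The rendered config, with the $TEST_TRANSPORT/$NVMF_* placeholders substituted, is what the cat/jq/printf steps below print.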
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:30:47.919 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:30:47.919 "params": {
00:30:47.919 "name": "Nvme0",
00:30:47.919 "trtype": "tcp",
00:30:47.919 "traddr": "10.0.0.2",
00:30:47.919 "adrfam": "ipv4",
00:30:47.919 "trsvcid": "4420",
00:30:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:47.919 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:47.919 "hdgst": false,
00:30:47.919 "ddgst": false
00:30:47.919 },
00:30:47.919 "method": "bdev_nvme_attach_controller"
00:30:47.919 }'
00:30:48.178 [2024-10-08 18:46:41.954436] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:30:48.178 [2024-10-08 18:46:41.954497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442723 ]
00:30:48.178 [2024-10-08 18:46:42.032191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:48.178 [2024-10-08 18:46:42.096850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:30:48.437 Running I/O for 1 seconds...
00:30:49.377 1788.00 IOPS, 111.75 MiB/s
00:30:49.377
00:30:49.377 Latency(us)
00:30:49.377 [2024-10-08T16:46:43.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:49.377 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:49.377 Verification LBA range: start 0x0 length 0x400
00:30:49.377 Nvme0n1 : 1.01 1827.98 114.25 0.00 0.00 34272.41 2034.35 36700.16
00:30:49.377 [2024-10-08T16:46:43.434Z] ===================================================================================================================
00:30:49.377 [2024-10-08T16:46:43.434Z] Total : 1827.98 114.25 0.00 0.00 34272.41 2034.35 36700.16
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
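The two throughput columns in the table above are internally consistent: the job runs 65536-byte IOs, and 65536 bytes is exactly 1/16 MiB, so MiB/s is simply IOPS divided by 16. A quick check with bc:

  $ echo '1827.98 / 16' | bc -l
  114.24875000000000000000

which rounds to the reported 114.25 MiB/s. The earlier failed run checks out the same way: 1517.68 / 16 = 94.855, i.e. the 94.86 MiB/s in its table.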
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:49.637 rmmod nvme_tcp
00:30:49.637 rmmod nvme_fabrics
00:30:49.637 rmmod nvme_keyring
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1442221 ']'
00:30:49.637 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1442221
00:30:49.638 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1442221 ']'
00:30:49.638 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1442221
00:30:49.638 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:30:49.638 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:49.638 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1442221
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1442221'
00:30:49.897 killing process with pid 1442221
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1442221
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1442221
00:30:49.897 [2024-10-08 18:46:43.819925] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore
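The modprobe -v -r sequence above reports rmmod of nvme_tcp, nvme_fabrics and nvme_keyring, so the host-side fabrics stack is unloaded before the target process is killed. A hypothetical one-liner, not part of the harness, to confirm the unload on a test node:

  lsmod | grep -E 'nvme_(tcp|fabrics|keyring)' || echo 'nvme fabrics modules unloaded'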
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:49.897 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:30:52.450
00:30:52.450 real 0m14.912s
00:30:52.450 user 0m19.431s
00:30:52.450 sys 0m7.579s
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:52.450 ************************************
00:30:52.450 END TEST nvmf_host_management
00:30:52.450 ************************************
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:52.450 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:52.450 ************************************
00:30:52.450 START TEST nvmf_lvol
00:30:52.450 ************************************
00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:30:52.450 * Looking for test storage...
00:30:52.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.450 --rc genhtml_branch_coverage=1 00:30:52.450 --rc genhtml_function_coverage=1 00:30:52.450 --rc genhtml_legend=1 00:30:52.450 --rc geninfo_all_blocks=1 00:30:52.450 --rc geninfo_unexecuted_blocks=1 00:30:52.450 00:30:52.450 ' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.450 --rc genhtml_branch_coverage=1 00:30:52.450 --rc genhtml_function_coverage=1 00:30:52.450 --rc genhtml_legend=1 00:30:52.450 --rc geninfo_all_blocks=1 00:30:52.450 --rc geninfo_unexecuted_blocks=1 00:30:52.450 00:30:52.450 ' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.450 --rc genhtml_branch_coverage=1 00:30:52.450 --rc genhtml_function_coverage=1 00:30:52.450 --rc genhtml_legend=1 00:30:52.450 --rc geninfo_all_blocks=1 00:30:52.450 --rc geninfo_unexecuted_blocks=1 00:30:52.450 00:30:52.450 ' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.450 --rc genhtml_branch_coverage=1 00:30:52.450 --rc genhtml_function_coverage=1 00:30:52.450 --rc genhtml_legend=1 00:30:52.450 --rc geninfo_all_blocks=1 00:30:52.450 --rc geninfo_unexecuted_blocks=1 00:30:52.450 00:30:52.450 ' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.450 18:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.450 18:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.584 18:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:00.584 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:00.584 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:00.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:00.585 Found net devices under 0000:31:00.0: cvl_0_0 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:00.585 Found net devices under 0000:31:00.1: cvl_0_1 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.585 
18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:31:00.585 00:31:00.585 --- 10.0.0.2 ping statistics --- 00:31:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.585 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:31:00.585 00:31:00.585 --- 10.0.0.1 ping statistics --- 00:31:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.585 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1447352 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1447352 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1447352 ']' 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:00.585 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:00.585 [2024-10-08 18:46:53.743496] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
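Note on the nvmftestinit block above: the harness splits the two physical E810 ports across network namespaces so that target and initiator traffic crosses real hardware on a single host, then sanity-checks both directions with ping before launching the target. Condensed to its effective commands (interface names and addresses exactly as logged; this is a sketch, not the full error handling of nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator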
00:31:00.585 [2024-10-08 18:46:53.744474] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:31:00.585 [2024-10-08 18:46:53.744510] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.585 [2024-10-08 18:46:53.828520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:00.585 [2024-10-08 18:46:53.905143] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.585 [2024-10-08 18:46:53.905205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.585 [2024-10-08 18:46:53.905213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.585 [2024-10-08 18:46:53.905220] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.585 [2024-10-08 18:46:53.905225] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.585 [2024-10-08 18:46:53.906774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.585 [2024-10-08 18:46:53.906933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.585 [2024-10-08 18:46:53.906933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.585 [2024-10-08 18:46:53.987446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:00.586 [2024-10-08 18:46:53.988469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:00.586 [2024-10-08 18:46:53.988839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:00.586 [2024-10-08 18:46:53.988991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
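The EAL banner and the reactor/thread notices above are the interrupt-mode plumbing at work: with --interrupt-mode, each reactor parks in epoll instead of busy-polling, and every spdk_thread is flipped to intr mode as it is created. A quick way to confirm this on a live target is the framework_get_reactors RPC (a sketch; the in_interrupt field in the output and an installed jq are assumptions, and the rpc.py path is the one used throughout this log):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors | jq '.reactors[] | {lcore, in_interrupt}'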
00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.586 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:00.846 [2024-10-08 18:46:54.771821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.846 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:01.105 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:01.105 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:01.364 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:01.364 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:01.624 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:01.624 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b16cc8c6-e703-4447-b333-d0872a7f4e48 00:31:01.624 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b16cc8c6-e703-4447-b333-d0872a7f4e48 lvol 20 00:31:01.883 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e0cca0d4-d695-4860-b09e-ea351ab22160 00:31:01.884 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:02.143 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e0cca0d4-d695-4860-b09e-ea351ab22160 00:31:02.403 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.403 [2024-10-08 18:46:56.371794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:02.403 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.662 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:02.662 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1447896 00:31:02.662 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:03.604 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e0cca0d4-d695-4860-b09e-ea351ab22160 MY_SNAPSHOT 00:31:03.864 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ad018090-0819-4ae4-9c87-af5b18d739c9 00:31:03.865 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e0cca0d4-d695-4860-b09e-ea351ab22160 30 00:31:04.124 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ad018090-0819-4ae4-9c87-af5b18d739c9 MY_CLONE 00:31:04.384 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ac33a2f3-ce56-4abf-ba5e-8120da5d3271 00:31:04.384 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ac33a2f3-ce56-4abf-ba5e-8120da5d3271 00:31:04.952 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1447896 00:31:13.207 Initializing NVMe Controllers 00:31:13.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:13.207 Controller IO queue size 128, less than required. 00:31:13.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:13.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:13.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:13.207 Initialization complete. Launching workers. 
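Stripped of the harness plumbing, the lvol exercise that just ran is the following RPC sequence (rpc_py abbreviates the full scripts/rpc.py path used above; $lvs, $lvol, $snapshot and $clone stand for the UUIDs captured in the trace; sizes are in MiB):

    rpc_py bdev_malloc_create 64 512                              # Malloc0
    rpc_py bdev_malloc_create 64 512                              # Malloc1
    rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc_py bdev_lvol_create_lvstore raid0 lvs                     # -> $lvs
    rpc_py bdev_lvol_create -u $lvs lvol 20                       # -> $lvol, 20 MiB
    rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_py bdev_lvol_snapshot $lvol MY_SNAPSHOT                   # -> $snapshot
    rpc_py bdev_lvol_resize $lvol 30                              # grow 20 -> 30 MiB under I/O
    rpc_py bdev_lvol_clone $snapshot MY_CLONE                     # -> $clone
    rpc_py bdev_lvol_inflate $clone                               # decouple the clone from $snapshot

The point of the test is that snapshot, resize, clone and inflate all happen while spdk_nvme_perf keeps 128 outstanding 4 KiB random writes against the namespace for 10 seconds (cores 3 and 4, -c 0x18), on an interrupt-mode target. The perf results follow.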
00:31:13.207 ========================================================
00:31:13.207 Latency(us)
00:31:13.207 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:31:13.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15193.50      59.35    8426.45    1696.70  110139.70
00:31:13.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15132.00      59.11    8459.01    3927.51   59545.46
00:31:13.207 ========================================================
00:31:13.207 Total                                                                   :   30325.50     118.46    8442.69    1696.70  110139.70
00:31:13.207
00:31:13.207 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.207 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e0cca0d4-d695-4860-b09e-ea351ab22160 00:31:13.207 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b16cc8c6-e703-4447-b333-d0872a7f4e48 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.466 rmmod nvme_tcp 00:31:13.466 rmmod nvme_fabrics 00:31:13.466 rmmod nvme_keyring 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1447352 ']' 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1447352 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1447352 ']' 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1447352 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1447352 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1447352' 00:31:13.466 killing process with pid 1447352 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1447352 00:31:13.466 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1447352 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.727 18:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.269 00:31:16.269 real 0m23.695s 00:31:16.269 user 0m55.621s 00:31:16.269 sys 0m10.500s 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:16.269 ************************************ 00:31:16.269 END TEST nvmf_lvol 00:31:16.269 ************************************ 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.269 ************************************ 00:31:16.269 START TEST nvmf_lvs_grow 00:31:16.269 
************************************ 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:16.269 * Looking for test storage... 00:31:16.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:16.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.269 --rc genhtml_branch_coverage=1 00:31:16.269 --rc genhtml_function_coverage=1 00:31:16.269 --rc genhtml_legend=1 00:31:16.269 --rc geninfo_all_blocks=1 00:31:16.269 --rc geninfo_unexecuted_blocks=1 00:31:16.269 00:31:16.269 ' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:16.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.269 --rc genhtml_branch_coverage=1 00:31:16.269 --rc genhtml_function_coverage=1 00:31:16.269 --rc genhtml_legend=1 00:31:16.269 --rc geninfo_all_blocks=1 00:31:16.269 --rc geninfo_unexecuted_blocks=1 00:31:16.269 00:31:16.269 ' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:16.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.269 --rc genhtml_branch_coverage=1 00:31:16.269 --rc genhtml_function_coverage=1 00:31:16.269 --rc genhtml_legend=1 00:31:16.269 --rc geninfo_all_blocks=1 00:31:16.269 --rc geninfo_unexecuted_blocks=1 00:31:16.269 00:31:16.269 ' 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:16.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.269 --rc genhtml_branch_coverage=1 00:31:16.269 --rc genhtml_function_coverage=1 00:31:16.269 --rc genhtml_legend=1 00:31:16.269 --rc geninfo_all_blocks=1 00:31:16.269 --rc geninfo_unexecuted_blocks=1 00:31:16.269 00:31:16.269 ' 00:31:16.269 18:47:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.269 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.269 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
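One detail in the build_nvmf_app_args sequence above: because this is the interrupt-mode variant of the suite, the very next step appends --interrupt-mode to NVMF_APP, which is why every target below starts with that flag. The host identity set earlier in common.sh comes from nvme-cli; a sketch of the derivation (the suffix strip matches the logged NVME_HOSTNQN/NVME_HOSTID pair, though the exact common.sh expression is not shown in this trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare <uuid>, as logged above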
00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.270 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:24.409 18:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
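The e810/x722/mlx arrays being filled above are keyed by PCI vendor:device ID (0x8086 for Intel, 0x15b3 for Mellanox), and the loop that follows resolves each matching function to its netdev through sysfs. A hypothetical manual equivalent for the two E810 functions this host reports (lspci's -d filter takes vendor:device):

    lspci -d 8086:159b                              # the two E810 ports, 0000:31:00.0 and .1 here
    ls /sys/bus/pci/devices/0000:31:00.0/net        # -> cvl_0_0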
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:24.409 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:24.409 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:24.409 Found net devices under 0000:31:00.0: cvl_0_0 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:24.409 Found net devices under 0000:31:00.1: cvl_0_1 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:24.409 18:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:24.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:24.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms
00:31:24.409
00:31:24.409 --- 10.0.0.2 ping statistics ---
00:31:24.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:24.409 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:24.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:24.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:31:24.409
00:31:24.409 --- 10.0.0.1 ping statistics ---
00:31:24.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:24.409 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:31:24.409 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1454150
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1454150
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1454150 ']'
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:24.410 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:31:24.410 [2024-10-08 18:47:17.774822] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
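Condensed, the namespace plumbing just traced amounts to the following sequence (a sketch assembled from the logged commands themselves; the cvl_0_0/cvl_0_1 interface names, 10.0.0.0/24 addresses, and the 4420 listener port are exactly as logged, and the ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Both pings succeeding is what lets the init path return 0 above and the TCP transport setup proceed.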
00:31:24.410 [2024-10-08 18:47:17.775969] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:31:24.410 [2024-10-08 18:47:17.776024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.410 [2024-10-08 18:47:17.866852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.410 [2024-10-08 18:47:17.962240] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.410 [2024-10-08 18:47:17.962294] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.410 [2024-10-08 18:47:17.962303] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.410 [2024-10-08 18:47:17.962311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.410 [2024-10-08 18:47:17.962317] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.410 [2024-10-08 18:47:17.963078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.410 [2024-10-08 18:47:18.039690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:24.410 [2024-10-08 18:47:18.039997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.669 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:24.929 [2024-10-08 18:47:18.787955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:24.929 ************************************ 00:31:24.929 START TEST lvs_grow_clean 00:31:24.929 ************************************ 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:24.929 18:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:25.189 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:25.189 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:25.449 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:25.449 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:25.449 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:25.449 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:25.449 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:25.449 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e lvol 150 00:31:25.710 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6ac74c94-9943-4295-a805-59d7f839e469 00:31:25.710 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:25.710 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:25.971 [2024-10-08 18:47:19.771640] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:25.971 [2024-10-08 18:47:19.771805] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:25.971 true 00:31:25.971 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:25.971 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:25.971 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:25.971 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:26.232 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6ac74c94-9943-4295-a805-59d7f839e469 00:31:26.493 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.493 [2024-10-08 18:47:20.524341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.493 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1454854 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1454854 /var/tmp/bdevperf.sock 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1454854 ']' 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:26.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.753 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:26.753 [2024-10-08 18:47:20.779919] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:31:26.754 [2024-10-08 18:47:20.780002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454854 ] 00:31:27.013 [2024-10-08 18:47:20.864158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.013 [2024-10-08 18:47:20.958278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.583 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:27.583 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:27.583 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:28.153 Nvme0n1 00:31:28.153 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:28.153 [ 00:31:28.153 { 00:31:28.153 "name": "Nvme0n1", 00:31:28.153 "aliases": [ 00:31:28.153 "6ac74c94-9943-4295-a805-59d7f839e469" 00:31:28.153 ], 00:31:28.153 "product_name": "NVMe disk", 00:31:28.153 "block_size": 4096, 00:31:28.153 "num_blocks": 38912, 00:31:28.153 "uuid": "6ac74c94-9943-4295-a805-59d7f839e469", 00:31:28.153 "numa_id": 0, 00:31:28.153 "assigned_rate_limits": { 00:31:28.153 "rw_ios_per_sec": 0, 00:31:28.153 "rw_mbytes_per_sec": 0, 00:31:28.153 "r_mbytes_per_sec": 0, 00:31:28.153 "w_mbytes_per_sec": 0 00:31:28.153 }, 00:31:28.153 "claimed": false, 00:31:28.153 "zoned": false, 00:31:28.153 "supported_io_types": { 00:31:28.153 "read": true, 00:31:28.153 "write": true, 00:31:28.153 "unmap": true, 00:31:28.153 "flush": true, 00:31:28.153 "reset": true, 00:31:28.153 "nvme_admin": true, 00:31:28.153 "nvme_io": true, 00:31:28.153 "nvme_io_md": false, 00:31:28.153 "write_zeroes": true, 00:31:28.153 "zcopy": false, 00:31:28.153 "get_zone_info": false, 00:31:28.153 "zone_management": false, 00:31:28.153 "zone_append": false, 00:31:28.153 "compare": true, 00:31:28.153 "compare_and_write": true, 00:31:28.153 "abort": true, 00:31:28.153 "seek_hole": false, 00:31:28.153 "seek_data": false, 00:31:28.153 "copy": true, 
00:31:28.153 "nvme_iov_md": false 00:31:28.153 }, 00:31:28.153 "memory_domains": [ 00:31:28.153 { 00:31:28.153 "dma_device_id": "system", 00:31:28.153 "dma_device_type": 1 00:31:28.153 } 00:31:28.153 ], 00:31:28.153 "driver_specific": { 00:31:28.153 "nvme": [ 00:31:28.153 { 00:31:28.153 "trid": { 00:31:28.153 "trtype": "TCP", 00:31:28.153 "adrfam": "IPv4", 00:31:28.153 "traddr": "10.0.0.2", 00:31:28.153 "trsvcid": "4420", 00:31:28.153 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:28.153 }, 00:31:28.153 "ctrlr_data": { 00:31:28.153 "cntlid": 1, 00:31:28.153 "vendor_id": "0x8086", 00:31:28.154 "model_number": "SPDK bdev Controller", 00:31:28.154 "serial_number": "SPDK0", 00:31:28.154 "firmware_revision": "25.01", 00:31:28.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:28.154 "oacs": { 00:31:28.154 "security": 0, 00:31:28.154 "format": 0, 00:31:28.154 "firmware": 0, 00:31:28.154 "ns_manage": 0 00:31:28.154 }, 00:31:28.154 "multi_ctrlr": true, 00:31:28.154 "ana_reporting": false 00:31:28.154 }, 00:31:28.154 "vs": { 00:31:28.154 "nvme_version": "1.3" 00:31:28.154 }, 00:31:28.154 "ns_data": { 00:31:28.154 "id": 1, 00:31:28.154 "can_share": true 00:31:28.154 } 00:31:28.154 } 00:31:28.154 ], 00:31:28.154 "mp_policy": "active_passive" 00:31:28.154 } 00:31:28.154 } 00:31:28.154 ] 00:31:28.154 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1455119 00:31:28.154 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:28.154 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:28.414 Running I/O for 10 seconds... 
00:31:29.355 Latency(us) 00:31:29.355 [2024-10-08T16:47:23.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.355 Nvme0n1 : 1.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:31:29.355 [2024-10-08T16:47:23.412Z] =================================================================================================================== 00:31:29.355 [2024-10-08T16:47:23.412Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:31:29.355 00:31:30.294 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:30.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:30.294 Nvme0n1 : 2.00 17081.50 66.72 0.00 0.00 0.00 0.00 0.00 00:31:30.294 [2024-10-08T16:47:24.351Z] =================================================================================================================== 00:31:30.294 [2024-10-08T16:47:24.351Z] Total : 17081.50 66.72 0.00 0.00 0.00 0.00 0.00 00:31:30.294 00:31:30.294 true 00:31:30.608 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:30.608 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:30.608 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:30.608 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:30.608 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1455119 00:31:31.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:31.546 Nvme0n1 : 3.00 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:31:31.546 [2024-10-08T16:47:25.603Z] =================================================================================================================== 00:31:31.546 [2024-10-08T16:47:25.603Z] Total : 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:31:31.546 00:31:32.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.484 Nvme0n1 : 4.00 18002.25 70.32 0.00 0.00 0.00 0.00 0.00 00:31:32.484 [2024-10-08T16:47:26.541Z] =================================================================================================================== 00:31:32.484 [2024-10-08T16:47:26.541Z] Total : 18002.25 70.32 0.00 0.00 0.00 0.00 0.00 00:31:32.484 00:31:33.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.421 Nvme0n1 : 5.00 19481.80 76.10 0.00 0.00 0.00 0.00 0.00 00:31:33.421 [2024-10-08T16:47:27.478Z] =================================================================================================================== 00:31:33.421 [2024-10-08T16:47:27.478Z] Total : 19481.80 76.10 0.00 0.00 0.00 0.00 0.00 00:31:33.421 00:31:34.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:34.360 Nvme0n1 : 6.00 20484.50 80.02 0.00 0.00 0.00 0.00 0.00 00:31:34.360 [2024-10-08T16:47:28.417Z] 
=================================================================================================================== 00:31:34.360 [2024-10-08T16:47:28.417Z] Total : 20484.50 80.02 0.00 0.00 0.00 0.00 0.00 00:31:34.360 00:31:35.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.301 Nvme0n1 : 7.00 21193.29 82.79 0.00 0.00 0.00 0.00 0.00 00:31:35.301 [2024-10-08T16:47:29.358Z] =================================================================================================================== 00:31:35.301 [2024-10-08T16:47:29.358Z] Total : 21193.29 82.79 0.00 0.00 0.00 0.00 0.00 00:31:35.301 00:31:36.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.241 Nvme0n1 : 8.00 21735.00 84.90 0.00 0.00 0.00 0.00 0.00 00:31:36.241 [2024-10-08T16:47:30.298Z] =================================================================================================================== 00:31:36.241 [2024-10-08T16:47:30.298Z] Total : 21735.00 84.90 0.00 0.00 0.00 0.00 0.00 00:31:36.241 00:31:37.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:37.625 Nvme0n1 : 9.00 22149.33 86.52 0.00 0.00 0.00 0.00 0.00 00:31:37.625 [2024-10-08T16:47:31.682Z] =================================================================================================================== 00:31:37.625 [2024-10-08T16:47:31.682Z] Total : 22149.33 86.52 0.00 0.00 0.00 0.00 0.00 00:31:37.625 00:31:38.566 00:31:38.566 Latency(us) 00:31:38.566 [2024-10-08T16:47:32.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.566 Nvme0n1 : 10.00 22488.36 87.85 0.00 0.00 5688.66 2880.85 31675.73 00:31:38.566 [2024-10-08T16:47:32.623Z] =================================================================================================================== 00:31:38.566 [2024-10-08T16:47:32.623Z] Total : 22488.36 87.85 0.00 0.00 5688.66 2880.85 31675.73 00:31:38.566 { 00:31:38.566 "results": [ 00:31:38.566 { 00:31:38.566 "job": "Nvme0n1", 00:31:38.566 "core_mask": "0x2", 00:31:38.566 "workload": "randwrite", 00:31:38.566 "status": "finished", 00:31:38.566 "queue_depth": 128, 00:31:38.566 "io_size": 4096, 00:31:38.567 "runtime": 10.002331, 00:31:38.567 "iops": 22488.357963758648, 00:31:38.567 "mibps": 87.84514829593222, 00:31:38.567 "io_failed": 0, 00:31:38.567 "io_timeout": 0, 00:31:38.567 "avg_latency_us": 5688.664222593687, 00:31:38.567 "min_latency_us": 2880.8533333333335, 00:31:38.567 "max_latency_us": 31675.733333333334 00:31:38.567 } 00:31:38.567 ], 00:31:38.567 "core_count": 1 00:31:38.567 } 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1454854 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1454854 ']' 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1454854 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1454854 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1454854' 00:31:38.567 killing process with pid 1454854 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1454854 00:31:38.567 Received shutdown signal, test time was about 10.000000 seconds 00:31:38.567 00:31:38.567 Latency(us) 00:31:38.567 [2024-10-08T16:47:32.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.567 [2024-10-08T16:47:32.624Z] =================================================================================================================== 00:31:38.567 [2024-10-08T16:47:32.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1454854 00:31:38.567 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.827 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:38.827 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:38.827 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:39.088 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:39.088 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:39.088 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:39.349 [2024-10-08 18:47:33.155704] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:39.349 
18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e
00:31:39.349 request:
00:31:39.349 {
00:31:39.349 "uuid": "f5a9de8b-cb40-4edd-8c07-a9061e4c031e",
00:31:39.349 "method": "bdev_lvol_get_lvstores",
00:31:39.349 "req_id": 1
00:31:39.349 }
00:31:39.349 Got JSON-RPC error response
00:31:39.349 response:
00:31:39.349 {
00:31:39.349 "code": -19,
00:31:39.349 "message": "No such device"
00:31:39.349 }
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:39.349 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:31:39.610 aio_bdev
00:31:39.610 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6ac74c94-9943-4295-a805-59d7f839e469
00:31:39.610 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6ac74c94-9943-4295-a805-59d7f839e469
00:31:39.610 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:39.610 18:47:33
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:39.610 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:39.610 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:39.610 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:39.871 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ac74c94-9943-4295-a805-59d7f839e469 -t 2000 00:31:39.871 [ 00:31:39.871 { 00:31:39.871 "name": "6ac74c94-9943-4295-a805-59d7f839e469", 00:31:39.871 "aliases": [ 00:31:39.871 "lvs/lvol" 00:31:39.871 ], 00:31:39.871 "product_name": "Logical Volume", 00:31:39.871 "block_size": 4096, 00:31:39.871 "num_blocks": 38912, 00:31:39.871 "uuid": "6ac74c94-9943-4295-a805-59d7f839e469", 00:31:39.871 "assigned_rate_limits": { 00:31:39.871 "rw_ios_per_sec": 0, 00:31:39.871 "rw_mbytes_per_sec": 0, 00:31:39.871 "r_mbytes_per_sec": 0, 00:31:39.871 "w_mbytes_per_sec": 0 00:31:39.871 }, 00:31:39.871 "claimed": false, 00:31:39.871 "zoned": false, 00:31:39.871 "supported_io_types": { 00:31:39.871 "read": true, 00:31:39.871 "write": true, 00:31:39.871 "unmap": true, 00:31:39.871 "flush": false, 00:31:39.871 "reset": true, 00:31:39.871 "nvme_admin": false, 00:31:39.871 "nvme_io": false, 00:31:39.871 "nvme_io_md": false, 00:31:39.871 "write_zeroes": true, 00:31:39.871 "zcopy": false, 00:31:39.871 "get_zone_info": false, 00:31:39.871 "zone_management": false, 00:31:39.871 "zone_append": false, 00:31:39.871 "compare": false, 00:31:39.871 "compare_and_write": false, 00:31:39.871 "abort": false, 00:31:39.871 "seek_hole": true, 00:31:39.871 "seek_data": true, 00:31:39.871 "copy": false, 00:31:39.871 "nvme_iov_md": false 00:31:39.871 }, 00:31:39.871 "driver_specific": { 00:31:39.871 "lvol": { 00:31:39.871 "lvol_store_uuid": "f5a9de8b-cb40-4edd-8c07-a9061e4c031e", 00:31:39.871 "base_bdev": "aio_bdev", 00:31:39.871 "thin_provision": false, 00:31:39.871 "num_allocated_clusters": 38, 00:31:39.871 "snapshot": false, 00:31:39.871 "clone": false, 00:31:39.871 "esnap_clone": false 00:31:39.871 } 00:31:39.871 } 00:31:39.871 } 00:31:39.871 ] 00:31:39.871 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:39.871 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:39.871 18:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:40.131 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:40.131 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:40.131 18:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:40.392 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:40.392 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6ac74c94-9943-4295-a805-59d7f839e469 00:31:40.392 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5a9de8b-cb40-4edd-8c07-a9061e4c031e 00:31:40.652 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:40.913 00:31:40.913 real 0m15.959s 00:31:40.913 user 0m15.613s 00:31:40.913 sys 0m1.476s 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.913 ************************************ 00:31:40.913 END TEST lvs_grow_clean 00:31:40.913 ************************************ 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:40.913 ************************************ 00:31:40.913 START TEST lvs_grow_dirty 00:31:40.913 ************************************ 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:40.913 18:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:40.913 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:41.174 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:41.174 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:41.434 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:41.434 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:41.434 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:41.434 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:41.434 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:41.434 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 lvol 150 00:31:41.694 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f7c9808-af79-483a-89cc-9cd39046f949 00:31:41.694 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:41.694 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:41.954 [2024-10-08 18:47:35.767613] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:41.954 [2024-10-08 18:47:35.767761] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:41.954 true 00:31:41.954 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:41.954 18:47:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:41.954 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:41.954 18:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:42.215 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f7c9808-af79-483a-89cc-9cd39046f949 00:31:42.475 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.475 [2024-10-08 18:47:36.416170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.475 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1457912 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1457912 /var/tmp/bdevperf.sock 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1457912 ']' 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:42.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:42.735 18:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:42.735 [2024-10-08 18:47:36.656544] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:31:42.735 [2024-10-08 18:47:36.656607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457912 ] 00:31:42.735 [2024-10-08 18:47:36.734405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.735 [2024-10-08 18:47:36.788484] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.675 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:43.675 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:43.675 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:43.935 Nvme0n1 00:31:43.935 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:44.196 [ 00:31:44.196 { 00:31:44.196 "name": "Nvme0n1", 00:31:44.196 "aliases": [ 00:31:44.196 "7f7c9808-af79-483a-89cc-9cd39046f949" 00:31:44.196 ], 00:31:44.196 "product_name": "NVMe disk", 00:31:44.196 "block_size": 4096, 00:31:44.196 "num_blocks": 38912, 00:31:44.196 "uuid": "7f7c9808-af79-483a-89cc-9cd39046f949", 00:31:44.196 "numa_id": 0, 00:31:44.196 "assigned_rate_limits": { 00:31:44.196 "rw_ios_per_sec": 0, 00:31:44.196 "rw_mbytes_per_sec": 0, 00:31:44.196 "r_mbytes_per_sec": 0, 00:31:44.196 "w_mbytes_per_sec": 0 00:31:44.196 }, 00:31:44.196 "claimed": false, 00:31:44.196 "zoned": false, 00:31:44.196 "supported_io_types": { 00:31:44.196 "read": true, 00:31:44.196 "write": true, 00:31:44.196 "unmap": true, 00:31:44.196 "flush": true, 00:31:44.196 "reset": true, 00:31:44.196 "nvme_admin": true, 00:31:44.196 "nvme_io": true, 00:31:44.196 "nvme_io_md": false, 00:31:44.196 "write_zeroes": true, 00:31:44.196 "zcopy": false, 00:31:44.196 "get_zone_info": false, 00:31:44.196 "zone_management": false, 00:31:44.196 "zone_append": false, 00:31:44.196 "compare": true, 00:31:44.196 "compare_and_write": true, 00:31:44.196 "abort": true, 00:31:44.196 "seek_hole": false, 00:31:44.196 "seek_data": false, 00:31:44.196 "copy": true, 00:31:44.196 "nvme_iov_md": false 00:31:44.196 }, 00:31:44.196 "memory_domains": [ 00:31:44.196 { 00:31:44.196 "dma_device_id": "system", 00:31:44.196 "dma_device_type": 1 00:31:44.196 } 00:31:44.196 ], 00:31:44.196 "driver_specific": { 00:31:44.196 "nvme": [ 00:31:44.196 { 00:31:44.196 "trid": { 00:31:44.196 "trtype": "TCP", 00:31:44.196 "adrfam": "IPv4", 00:31:44.196 "traddr": "10.0.0.2", 00:31:44.196 "trsvcid": "4420", 00:31:44.196 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:44.196 }, 00:31:44.196 "ctrlr_data": { 00:31:44.196 "cntlid": 1, 00:31:44.196 "vendor_id": "0x8086", 00:31:44.196 "model_number": "SPDK bdev Controller", 00:31:44.196 "serial_number": "SPDK0", 00:31:44.196 "firmware_revision": "25.01", 00:31:44.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.196 "oacs": { 00:31:44.196 "security": 0, 00:31:44.196 "format": 0, 00:31:44.196 "firmware": 0, 00:31:44.196 "ns_manage": 0 00:31:44.196 }, 
00:31:44.196 "multi_ctrlr": true, 00:31:44.196 "ana_reporting": false 00:31:44.196 }, 00:31:44.196 "vs": { 00:31:44.196 "nvme_version": "1.3" 00:31:44.196 }, 00:31:44.196 "ns_data": { 00:31:44.196 "id": 1, 00:31:44.196 "can_share": true 00:31:44.196 } 00:31:44.196 } 00:31:44.196 ], 00:31:44.196 "mp_policy": "active_passive" 00:31:44.196 } 00:31:44.196 } 00:31:44.196 ] 00:31:44.196 18:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1458091 00:31:44.196 18:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:44.196 18:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:44.196 Running I/O for 10 seconds... 00:31:45.134 Latency(us) 00:31:45.134 [2024-10-08T16:47:39.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:45.134 Nvme0n1 : 1.00 17784.00 69.47 0.00 0.00 0.00 0.00 0.00 00:31:45.134 [2024-10-08T16:47:39.191Z] =================================================================================================================== 00:31:45.134 [2024-10-08T16:47:39.191Z] Total : 17784.00 69.47 0.00 0.00 0.00 0.00 0.00 00:31:45.134 00:31:46.076 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:46.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.076 Nvme0n1 : 2.00 17845.50 69.71 0.00 0.00 0.00 0.00 0.00 00:31:46.076 [2024-10-08T16:47:40.133Z] =================================================================================================================== 00:31:46.076 [2024-10-08T16:47:40.133Z] Total : 17845.50 69.71 0.00 0.00 0.00 0.00 0.00 00:31:46.076 00:31:46.336 true 00:31:46.336 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:46.336 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:46.597 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:46.597 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:46.597 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1458091 00:31:47.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.167 Nvme0n1 : 3.00 17908.33 69.95 0.00 0.00 0.00 0.00 0.00 00:31:47.167 [2024-10-08T16:47:41.224Z] =================================================================================================================== 00:31:47.167 [2024-10-08T16:47:41.224Z] Total : 17908.33 69.95 0.00 0.00 0.00 0.00 0.00 00:31:47.167 00:31:48.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:48.109 Nvme0n1 : 4.00 18447.75 72.06 0.00 0.00 0.00 0.00 0.00 00:31:48.109 [2024-10-08T16:47:42.166Z] =================================================================================================================== 00:31:48.109 [2024-10-08T16:47:42.166Z] Total : 18447.75 72.06 0.00 0.00 0.00 0.00 0.00 00:31:48.109 00:31:49.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.494 Nvme0n1 : 5.00 19838.20 77.49 0.00 0.00 0.00 0.00 0.00 00:31:49.494 [2024-10-08T16:47:43.551Z] =================================================================================================================== 00:31:49.494 [2024-10-08T16:47:43.551Z] Total : 19838.20 77.49 0.00 0.00 0.00 0.00 0.00 00:31:49.494 00:31:50.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.064 Nvme0n1 : 6.00 20765.33 81.11 0.00 0.00 0.00 0.00 0.00 00:31:50.064 [2024-10-08T16:47:44.121Z] =================================================================================================================== 00:31:50.064 [2024-10-08T16:47:44.121Z] Total : 20765.33 81.11 0.00 0.00 0.00 0.00 0.00 00:31:50.064 00:31:51.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.445 Nvme0n1 : 7.00 21436.57 83.74 0.00 0.00 0.00 0.00 0.00 00:31:51.445 [2024-10-08T16:47:45.502Z] =================================================================================================================== 00:31:51.445 [2024-10-08T16:47:45.502Z] Total : 21436.57 83.74 0.00 0.00 0.00 0.00 0.00 00:31:51.445 00:31:52.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.384 Nvme0n1 : 8.00 21946.00 85.73 0.00 0.00 0.00 0.00 0.00 00:31:52.384 [2024-10-08T16:47:46.441Z] =================================================================================================================== 00:31:52.384 [2024-10-08T16:47:46.441Z] Total : 21946.00 85.73 0.00 0.00 0.00 0.00 0.00 00:31:52.384 00:31:53.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.324 Nvme0n1 : 9.00 22336.89 87.25 0.00 0.00 0.00 0.00 0.00 00:31:53.324 [2024-10-08T16:47:47.381Z] =================================================================================================================== 00:31:53.324 [2024-10-08T16:47:47.381Z] Total : 22336.89 87.25 0.00 0.00 0.00 0.00 0.00 00:31:53.324 00:31:54.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.263 Nvme0n1 : 10.00 22654.40 88.49 0.00 0.00 0.00 0.00 0.00 00:31:54.263 [2024-10-08T16:47:48.320Z] =================================================================================================================== 00:31:54.263 [2024-10-08T16:47:48.320Z] Total : 22654.40 88.49 0.00 0.00 0.00 0.00 0.00 00:31:54.263 00:31:54.263 00:31:54.263 Latency(us) 00:31:54.263 [2024-10-08T16:47:48.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.263 Nvme0n1 : 10.00 22660.44 88.52 0.00 0.00 5645.73 1604.27 13052.59 00:31:54.263 [2024-10-08T16:47:48.320Z] =================================================================================================================== 00:31:54.263 [2024-10-08T16:47:48.320Z] Total : 22660.44 88.52 0.00 0.00 5645.73 1604.27 13052.59 00:31:54.263 { 00:31:54.263 "results": [ 00:31:54.263 { 00:31:54.263 "job": "Nvme0n1", 00:31:54.263 "core_mask": "0x2", 00:31:54.263 "workload": "randwrite", 
00:31:54.263 "status": "finished", 00:31:54.263 "queue_depth": 128, 00:31:54.263 "io_size": 4096, 00:31:54.263 "runtime": 10.002982, 00:31:54.263 "iops": 22660.44265599998, 00:31:54.263 "mibps": 88.51735412499993, 00:31:54.263 "io_failed": 0, 00:31:54.263 "io_timeout": 0, 00:31:54.263 "avg_latency_us": 5645.729484012141, 00:31:54.263 "min_latency_us": 1604.2666666666667, 00:31:54.263 "max_latency_us": 13052.586666666666 00:31:54.263 } 00:31:54.263 ], 00:31:54.263 "core_count": 1 00:31:54.263 } 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1457912 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1457912 ']' 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1457912 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457912 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457912' 00:31:54.263 killing process with pid 1457912 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1457912 00:31:54.263 Received shutdown signal, test time was about 10.000000 seconds 00:31:54.263 00:31:54.263 Latency(us) 00:31:54.263 [2024-10-08T16:47:48.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.263 [2024-10-08T16:47:48.320Z] =================================================================================================================== 00:31:54.263 [2024-10-08T16:47:48.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.263 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1457912 00:31:54.522 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:54.522 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:54.782 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:54.782 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq 
-r '.[0].free_clusters' 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1454150 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1454150 00:31:55.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1454150 Killed "${NVMF_APP[@]}" "$@" 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1460239 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1460239 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1460239 ']' 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.043 18:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:55.043 [2024-10-08 18:47:49.005072] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.043 [2024-10-08 18:47:49.006145] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
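For readers reconstructing the lvs_grow_dirty flow from the trace above: the script grows the lvstore while bdevperf randwrite I/O is running, SIGKILLs the target before the metadata can be persisted cleanly, then restarts it so blobstore recovery can replay the dirty lvstore. A condensed sketch of that RPC sequence, assuming this workspace's rpc.py layout (the UUID and cluster counts are the ones from this run; paths are relative to the spdk checkout):

    rpc=./scripts/rpc.py
    lvs=8df6adc0-27cb-4477-91bb-d0baa2052a81
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                  # grow under live randwrite I/O
    $rpc bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'                 # expect 99 after the grow
    kill -9 "$nvmfpid"                                     # no clean shutdown: lvstore stays dirty
    "${NVMF_APP[@]}" -m 0x1 &                              # restart single-core, interrupt mode
    $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach backing file;
                                                           # blobstore recovery replays the blobs
    $rpc bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].free_clusters'                       # expect 61 once recovery completes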
00:31:55.043 [2024-10-08 18:47:49.006193] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.043 [2024-10-08 18:47:49.096653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.303 [2024-10-08 18:47:49.153175] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.303 [2024-10-08 18:47:49.153209] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.303 [2024-10-08 18:47:49.153215] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.303 [2024-10-08 18:47:49.153220] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.303 [2024-10-08 18:47:49.153224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.303 [2024-10-08 18:47:49.153702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.303 [2024-10-08 18:47:49.204324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.303 [2024-10-08 18:47:49.204515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.874 18:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:56.135 [2024-10-08 18:47:50.036147] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:56.135 [2024-10-08 18:47:50.036401] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:56.135 [2024-10-08 18:47:50.036492] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7f7c9808-af79-483a-89cc-9cd39046f949 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=7f7c9808-af79-483a-89cc-9cd39046f949 00:31:56.135 18:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:56.135 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:56.397 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f7c9808-af79-483a-89cc-9cd39046f949 -t 2000 00:31:56.397 [ 00:31:56.397 { 00:31:56.397 "name": "7f7c9808-af79-483a-89cc-9cd39046f949", 00:31:56.397 "aliases": [ 00:31:56.397 "lvs/lvol" 00:31:56.397 ], 00:31:56.397 "product_name": "Logical Volume", 00:31:56.397 "block_size": 4096, 00:31:56.397 "num_blocks": 38912, 00:31:56.397 "uuid": "7f7c9808-af79-483a-89cc-9cd39046f949", 00:31:56.397 "assigned_rate_limits": { 00:31:56.397 "rw_ios_per_sec": 0, 00:31:56.397 "rw_mbytes_per_sec": 0, 00:31:56.397 "r_mbytes_per_sec": 0, 00:31:56.397 "w_mbytes_per_sec": 0 00:31:56.397 }, 00:31:56.397 "claimed": false, 00:31:56.397 "zoned": false, 00:31:56.397 "supported_io_types": { 00:31:56.397 "read": true, 00:31:56.397 "write": true, 00:31:56.397 "unmap": true, 00:31:56.397 "flush": false, 00:31:56.397 "reset": true, 00:31:56.397 "nvme_admin": false, 00:31:56.397 "nvme_io": false, 00:31:56.397 "nvme_io_md": false, 00:31:56.397 "write_zeroes": true, 00:31:56.397 "zcopy": false, 00:31:56.397 "get_zone_info": false, 00:31:56.397 "zone_management": false, 00:31:56.397 "zone_append": false, 00:31:56.397 "compare": false, 00:31:56.397 "compare_and_write": false, 00:31:56.397 "abort": false, 00:31:56.397 "seek_hole": true, 00:31:56.397 "seek_data": true, 00:31:56.397 "copy": false, 00:31:56.397 "nvme_iov_md": false 00:31:56.397 }, 00:31:56.397 "driver_specific": { 00:31:56.397 "lvol": { 00:31:56.397 "lvol_store_uuid": "8df6adc0-27cb-4477-91bb-d0baa2052a81", 00:31:56.397 "base_bdev": "aio_bdev", 00:31:56.397 "thin_provision": false, 00:31:56.397 "num_allocated_clusters": 38, 00:31:56.397 "snapshot": false, 00:31:56.397 "clone": false, 00:31:56.397 "esnap_clone": false 00:31:56.397 } 00:31:56.397 } 00:31:56.397 } 00:31:56.397 ] 00:31:56.658 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:56.658 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:56.658 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:56.658 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:56.658 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:56.658 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:56.920 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:56.920 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:56.920 [2024-10-08 18:47:50.962264] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.181 18:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:57.181 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:57.181 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:57.181 request: 00:31:57.181 { 00:31:57.181 "uuid": "8df6adc0-27cb-4477-91bb-d0baa2052a81", 00:31:57.181 "method": "bdev_lvol_get_lvstores", 00:31:57.181 "req_id": 1 00:31:57.181 } 00:31:57.181 Got JSON-RPC error response 00:31:57.181 response: 00:31:57.181 { 00:31:57.181 "code": -19, 00:31:57.181 "message": "No such device" 
00:31:57.181 } 00:31:57.182 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:57.182 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.182 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.182 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.182 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:57.442 aio_bdev 00:31:57.442 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7f7c9808-af79-483a-89cc-9cd39046f949 00:31:57.442 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=7f7c9808-af79-483a-89cc-9cd39046f949 00:31:57.442 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:57.442 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:57.442 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:57.442 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:57.443 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:57.703 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f7c9808-af79-483a-89cc-9cd39046f949 -t 2000 00:31:57.703 [ 00:31:57.703 { 00:31:57.703 "name": "7f7c9808-af79-483a-89cc-9cd39046f949", 00:31:57.703 "aliases": [ 00:31:57.703 "lvs/lvol" 00:31:57.703 ], 00:31:57.703 "product_name": "Logical Volume", 00:31:57.703 "block_size": 4096, 00:31:57.703 "num_blocks": 38912, 00:31:57.703 "uuid": "7f7c9808-af79-483a-89cc-9cd39046f949", 00:31:57.703 "assigned_rate_limits": { 00:31:57.703 "rw_ios_per_sec": 0, 00:31:57.703 "rw_mbytes_per_sec": 0, 00:31:57.703 "r_mbytes_per_sec": 0, 00:31:57.703 "w_mbytes_per_sec": 0 00:31:57.703 }, 00:31:57.703 "claimed": false, 00:31:57.703 "zoned": false, 00:31:57.703 "supported_io_types": { 00:31:57.703 "read": true, 00:31:57.703 "write": true, 00:31:57.703 "unmap": true, 00:31:57.703 "flush": false, 00:31:57.703 "reset": true, 00:31:57.703 "nvme_admin": false, 00:31:57.703 "nvme_io": false, 00:31:57.703 "nvme_io_md": false, 00:31:57.703 "write_zeroes": true, 00:31:57.703 "zcopy": false, 00:31:57.703 "get_zone_info": false, 00:31:57.703 "zone_management": false, 00:31:57.703 "zone_append": false, 00:31:57.703 "compare": false, 00:31:57.703 "compare_and_write": false, 00:31:57.703 "abort": false, 00:31:57.703 "seek_hole": true, 00:31:57.703 "seek_data": true, 00:31:57.703 "copy": false, 
00:31:57.703 "nvme_iov_md": false 00:31:57.703 }, 00:31:57.703 "driver_specific": { 00:31:57.703 "lvol": { 00:31:57.703 "lvol_store_uuid": "8df6adc0-27cb-4477-91bb-d0baa2052a81", 00:31:57.703 "base_bdev": "aio_bdev", 00:31:57.703 "thin_provision": false, 00:31:57.703 "num_allocated_clusters": 38, 00:31:57.703 "snapshot": false, 00:31:57.703 "clone": false, 00:31:57.703 "esnap_clone": false 00:31:57.703 } 00:31:57.703 } 00:31:57.703 } 00:31:57.703 ] 00:31:57.703 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:57.703 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:57.703 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:57.964 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:57.964 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:57.964 18:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:58.225 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:58.225 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f7c9808-af79-483a-89cc-9cd39046f949 00:31:58.225 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8df6adc0-27cb-4477-91bb-d0baa2052a81 00:31:58.486 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.747 00:31:58.747 real 0m17.759s 00:31:58.747 user 0m35.782s 00:31:58.747 sys 0m3.056s 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:58.747 ************************************ 00:31:58.747 END TEST lvs_grow_dirty 00:31:58.747 ************************************ 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:31:58.747 
18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:58.747 nvmf_trace.0 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.747 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.747 rmmod nvme_tcp 00:31:58.747 rmmod nvme_fabrics 00:31:58.747 rmmod nvme_keyring 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1460239 ']' 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1460239 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1460239 ']' 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1460239 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460239 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
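The process_shm step in the teardown above archives the SPDK trace buffer before the nvme modules are unloaded. A minimal sketch of that capture, assuming the default shm name for app instance 0 (the output directory is illustrative):

    # nvmf_tgt ran with -e 0xFFFF, so /dev/shm/nvmf_trace.0 holds the trace buffer
    tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    # per the app notice earlier in this log, the raw file can also be copied
    # verbatim for offline analysis/debug with the spdk_trace tool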
00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460239' 00:31:59.009 killing process with pid 1460239 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1460239 00:31:59.009 18:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1460239 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.009 18:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.651 00:32:01.651 real 0m45.352s 00:32:01.651 user 0m54.282s 00:32:01.651 sys 0m11.027s 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:01.651 ************************************ 00:32:01.651 END TEST nvmf_lvs_grow 00:32:01.651 ************************************ 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:01.651 ************************************ 00:32:01.651 START TEST nvmf_bdev_io_wait 00:32:01.651 ************************************ 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:32:01.651 * Looking for test storage... 00:32:01.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.651 --rc genhtml_branch_coverage=1 00:32:01.651 --rc genhtml_function_coverage=1 00:32:01.651 --rc genhtml_legend=1 00:32:01.651 --rc geninfo_all_blocks=1 00:32:01.651 --rc geninfo_unexecuted_blocks=1 00:32:01.651 00:32:01.651 ' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.651 --rc genhtml_branch_coverage=1 00:32:01.651 --rc genhtml_function_coverage=1 00:32:01.651 --rc genhtml_legend=1 00:32:01.651 --rc geninfo_all_blocks=1 00:32:01.651 --rc geninfo_unexecuted_blocks=1 00:32:01.651 00:32:01.651 ' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.651 --rc genhtml_branch_coverage=1 00:32:01.651 --rc genhtml_function_coverage=1 00:32:01.651 --rc genhtml_legend=1 00:32:01.651 --rc geninfo_all_blocks=1 00:32:01.651 --rc geninfo_unexecuted_blocks=1 00:32:01.651 00:32:01.651 ' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.651 --rc genhtml_branch_coverage=1 00:32:01.651 --rc genhtml_function_coverage=1 00:32:01.651 --rc genhtml_legend=1 00:32:01.651 --rc geninfo_all_blocks=1 00:32:01.651 --rc 
geninfo_unexecuted_blocks=1 00:32:01.651 00:32:01.651 ' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.651 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.652 18:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
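As the common.sh trace above shows, --interrupt-mode is appended to NVMF_APP once up front, so every nvmf_tgt launched by this job inherits the flag. A minimal sketch of that pattern (the guard variable name here is illustrative; the trace only shows the '[' 1 -eq 1 ']' test):

    NVMF_APP=(./build/bin/nvmf_tgt -i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    if [ "${interrupt_mode:-0}" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)    # inherited by every target start below
    fi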
00:32:09.793 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
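The ID tables just built are what the harness matches PCI NICs against (0x8086:0x159b is the Intel E810 found next). A condensed sketch of that enumeration, mirroring the sysfs lookups in the trace below but omitting the link-state check:

    e810=(0x1592 0x159b)                       # Intel E810 device IDs, per the table above
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue
        net_devs+=("${pci_net_devs[@]##*/}")   # e.g. cvl_0_0, cvl_0_1
    done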
00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:09.794 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:09.794 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:09.794 Found net devices under 0000:31:00.0: cvl_0_0 00:32:09.794 
18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:09.794 Found net devices under 0000:31:00.1: cvl_0_1 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.794 18:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:32:09.794 00:32:09.794 --- 10.0.0.2 ping statistics --- 00:32:09.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.794 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:32:09.794 00:32:09.794 --- 10.0.0.1 ping statistics --- 00:32:09.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.794 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:09.794 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1465311 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1465311 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1465311 ']' 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
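
For orientation, the nvmf_tcp_init trace above (nvmf/common.sh@250-291) wires one E810 port into a private network namespace as the target side and leaves its sibling port in the root namespace as the initiator side. A condensed sketch of the same steps, with interface names and addresses taken from the log itself:

# target port goes into its own namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# open the NVMe/TCP port; the comment tag lets teardown find this rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Both pings answer in well under a millisecond above, so the two ports are reachable back-to-back before any NVMe-oF traffic is attempted.
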
00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:09.795 18:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:09.795 [2024-10-08 18:48:03.254736] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:09.795 [2024-10-08 18:48:03.255862] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:32:09.795 [2024-10-08 18:48:03.255912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.795 [2024-10-08 18:48:03.348983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.795 [2024-10-08 18:48:03.446751] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.795 [2024-10-08 18:48:03.446819] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.795 [2024-10-08 18:48:03.446827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.795 [2024-10-08 18:48:03.446835] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.795 [2024-10-08 18:48:03.446841] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.795 [2024-10-08 18:48:03.448964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.795 [2024-10-08 18:48:03.449126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.795 [2024-10-08 18:48:03.449207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.795 [2024-10-08 18:48:03.449207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.795 [2024-10-08 18:48:03.449823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
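
The target application itself was launched inside that namespace (the nvmf/common.sh@506 line above). Reassembled from the trace, the invocation is:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc

Reading the flags against the startup notices: -m 0xF asks for cores 0-3, matching the four "Reactor started on core" notices; -e 0xFFFF is the tracepoint group mask echoed by app_setup_trace; -i 0 selects the shared-memory id (hence --file-prefix=spdk0 in the EAL parameters); --interrupt-mode is what produces the "Set spdk_thread ... to intr mode" notices, with reactors sleeping on events instead of busy-polling; and --wait-for-rpc holds subsystem initialization until the framework_start_init RPC below, so the test can change bdev options first.
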
00:32:10.056 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.056 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:10.056 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:10.056 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:10.056 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.318 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.318 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 [2024-10-08 18:48:04.193407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:10.319 [2024-10-08 18:48:04.194093] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:10.319 [2024-10-08 18:48:04.194290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:10.319 [2024-10-08 18:48:04.194447] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
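
Those first two RPCs (bdev_io_wait.sh@18-19 above) are the heart of this test. bdev_set_options -p 5 -c 1 shrinks the global bdev_io pool to 5 entries with a per-thread cache of 1, and it runs before framework_start_init, while the bdev layer is still uninitialized. The natural reading is that, with 128-deep workloads about to be submitted, most spdk_bdev_* calls will fail for lack of a bdev_io and must park themselves on the spdk_bdev_queue_io_wait retry path, which is the mechanism nvmf_bdev_io_wait exists to exercise. Done by hand against the same socket, the sequence would be roughly:

# rpc_cmd in the trace is a thin wrapper around the standard SPDK RPC client
scripts/rpc.py -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1   # tiny bdev_io pool
scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init        # now init subsystems

(-p and -c are the short forms of rpc.py's --bdev-io-pool-size and --bdev-io-cache-size.)
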
00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 [2024-10-08 18:48:04.206044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 Malloc0 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:10.319 [2024-10-08 18:48:04.294709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1465545 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1465547 00:32:10.319 18:48:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:10.319 { 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme$subsystem", 00:32:10.319 "trtype": "$TEST_TRANSPORT", 00:32:10.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "$NVMF_PORT", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.319 "hdgst": ${hdgst:-false}, 00:32:10.319 "ddgst": ${ddgst:-false} 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 } 00:32:10.319 EOF 00:32:10.319 )") 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1465549 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:10.319 { 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme$subsystem", 00:32:10.319 "trtype": "$TEST_TRANSPORT", 00:32:10.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "$NVMF_PORT", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.319 "hdgst": ${hdgst:-false}, 00:32:10.319 "ddgst": ${ddgst:-false} 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 } 00:32:10.319 EOF 00:32:10.319 )") 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1465552 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
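
Collecting the rpc_cmd lines from bdev_io_wait.sh@20-25 above, the target is provisioned with one TCP transport, one 64 MiB malloc bdev, and a subsystem that exports it on the namespaced address. As standalone RPC calls (arguments copied verbatim from the trace; transport flags beyond -u, the I/O unit size, are left unglossed rather than guessed at):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                           # -a: allow any host; -s: serial
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above confirms the listener came up inside the namespace; the four bdevperf clients launched at @27-33 then attach to this subsystem from the root namespace.
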
00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:10.319 { 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme$subsystem", 00:32:10.319 "trtype": "$TEST_TRANSPORT", 00:32:10.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "$NVMF_PORT", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.319 "hdgst": ${hdgst:-false}, 00:32:10.319 "ddgst": ${ddgst:-false} 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 } 00:32:10.319 EOF 00:32:10.319 )") 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:10.319 { 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme$subsystem", 00:32:10.319 "trtype": "$TEST_TRANSPORT", 00:32:10.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "$NVMF_PORT", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.319 "hdgst": ${hdgst:-false}, 00:32:10.319 "ddgst": ${ddgst:-false} 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 } 00:32:10.319 EOF 00:32:10.319 )") 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1465545 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
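
The heredoc/cat/jq churn above is gen_nvmf_target_json (nvmf/common.sh@558-584) running once per bdevperf instance; the same pattern repeats below for the remaining instances. Stripped of xtrace noise, the visible pattern is: accumulate one JSON object per subsystem in a bash array via a heredoc, then join and pretty-print the result, which each bdevperf receives as an anonymous file (--json /dev/fd/63 comes from process substitution). A minimal sketch of that pattern, not the full helper, which also wraps these objects into a complete bdev-subsystem config:

config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
printf '%s\n' "${config[@]}" | jq .   # what bdevperf reads on /dev/fd/63
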
00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme1", 00:32:10.319 "trtype": "tcp", 00:32:10.319 "traddr": "10.0.0.2", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "4420", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.319 "hdgst": false, 00:32:10.319 "ddgst": false 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 }' 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme1", 00:32:10.319 "trtype": "tcp", 00:32:10.319 "traddr": "10.0.0.2", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "4420", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.319 "hdgst": false, 00:32:10.319 "ddgst": false 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 }' 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme1", 00:32:10.319 "trtype": "tcp", 00:32:10.319 "traddr": "10.0.0.2", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "4420", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.319 "hdgst": false, 00:32:10.319 "ddgst": false 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 }' 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:10.319 18:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:10.319 "params": { 00:32:10.319 "name": "Nvme1", 00:32:10.319 "trtype": "tcp", 00:32:10.319 "traddr": "10.0.0.2", 00:32:10.319 "adrfam": "ipv4", 00:32:10.319 "trsvcid": "4420", 00:32:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.319 "hdgst": false, 00:32:10.319 "ddgst": false 00:32:10.319 }, 00:32:10.319 "method": "bdev_nvme_attach_controller" 00:32:10.319 }' 00:32:10.319 [2024-10-08 18:48:04.352899] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:32:10.319 [2024-10-08 18:48:04.352899] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
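
After parameter expansion, the four printf outputs above are identical: every bdevperf instance is told to build one bdev by attaching a controller over TCP to the subsystem just exported, as host nqn.2016-06.io.spdk:host1 with both digests disabled. Issued manually against a running app, the equivalent attach RPC would be (shown as an illustration, not what the script actually runs):

scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The bdev for the first namespace is named by appending n1 to the controller name, which is why the result tables below report Nvme1n1.
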
00:32:10.319 [2024-10-08 18:48:04.352984] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:32:10.319 [2024-10-08 18:48:04.352985] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:32:10.319 [2024-10-08 18:48:04.357820] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:32:10.319 [2024-10-08 18:48:04.357888] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:32:10.319 [2024-10-08 18:48:04.359037] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization...
00:32:10.319 [2024-10-08 18:48:04.359097] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:32:10.581 [2024-10-08 18:48:04.571296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.843 [2024-10-08 18:48:04.639688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:32:10.843 [2024-10-08 18:48:04.664061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.843 [2024-10-08 18:48:04.736203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:32:10.843 [2024-10-08 18:48:04.757007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.843 [2024-10-08 18:48:04.827779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.843 [2024-10-08 18:48:04.833404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:32:10.843 [2024-10-08 18:48:04.895780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:32:11.105 Running I/O for 1 seconds...
00:32:11.105 Running I/O for 1 seconds...
00:32:11.366 Running I/O for 1 seconds...
00:32:11.366 Running I/O for 1 seconds...
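
Four bdevperf processes are now driving I/O concurrently against the same Nvme1n1 bdev, one workload each, which is why four separate single-core EAL initializations ("Total cores available: 1") appear above. Reassembled from the launch lines at bdev_io_wait.sh@27-33:

# write:  -m 0x10  -i 1  (file-prefix spdk1)
# read:   -m 0x20  -i 2  (file-prefix spdk2)
# flush:  -m 0x40  -i 3  (file-prefix spdk3)
# unmap:  -m 0x80  -i 4  (file-prefix spdk4)
# all four share the remaining options; for the write instance:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256

-q 128 is the queue depth that overwhelms the 5-entry bdev_io pool, -o 4096 the I/O size in bytes, -t 1 the one-second run that the "Running I/O for 1 seconds..." lines announce, and -s 256 the per-process memory reservation, visible as -m 256 in the EAL parameter lines.
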
00:32:12.310 11177.00 IOPS, 43.66 MiB/s
00:32:12.310 Latency(us)
00:32:12.310 [2024-10-08T16:48:06.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.310 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:12.310 Nvme1n1 : 1.01 11237.73 43.90 0.00 0.00 11348.29 2389.33 13489.49
00:32:12.310 [2024-10-08T16:48:06.367Z] ===================================================================================================================
00:32:12.310 [2024-10-08T16:48:06.367Z] Total : 11237.73 43.90 0.00 0.00 11348.29 2389.33 13489.49
00:32:12.310 9756.00 IOPS, 38.11 MiB/s
00:32:12.310 Latency(us)
00:32:12.310 [2024-10-08T16:48:06.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.310 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:12.310 Nvme1n1 : 1.01 9808.94 38.32 0.00 0.00 12996.33 5816.32 16274.77
00:32:12.310 [2024-10-08T16:48:06.367Z] ===================================================================================================================
00:32:12.310 [2024-10-08T16:48:06.367Z] Total : 9808.94 38.32 0.00 0.00 12996.33 5816.32 16274.77
00:32:12.310 11597.00 IOPS, 45.30 MiB/s
00:32:12.310 Latency(us)
00:32:12.310 [2024-10-08T16:48:06.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.310 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:12.310 Nvme1n1 : 1.01 11690.29 45.67 0.00 0.00 10917.96 3932.16 18350.08
00:32:12.310 [2024-10-08T16:48:06.367Z] ===================================================================================================================
00:32:12.310 [2024-10-08T16:48:06.367Z] Total : 11690.29 45.67 0.00 0.00 10917.96 3932.16 18350.08
00:32:12.310 188176.00 IOPS, 735.06 MiB/s
00:32:12.310 Latency(us)
00:32:12.310 [2024-10-08T16:48:06.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.310 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:12.310 Nvme1n1 : 1.00 187798.45 733.59 0.00 0.00 677.79 310.61 1993.39
00:32:12.310 [2024-10-08T16:48:06.367Z] ===================================================================================================================
00:32:12.310 [2024-10-08T16:48:06.367Z] Total : 187798.45 733.59 0.00 0.00 677.79 310.61 1993.39
00:32:12.310 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1465547
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1465549
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1465552
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:12.572 rmmod nvme_tcp 00:32:12.572 rmmod nvme_fabrics 00:32:12.572 rmmod nvme_keyring 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1465311 ']' 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1465311 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1465311 ']' 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1465311 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.572 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465311 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465311' 00:32:12.834 killing process with pid 1465311 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1465311 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1465311 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.834 18:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.381 00:32:15.381 real 0m13.673s 00:32:15.381 user 0m17.400s 00:32:15.381 sys 0m8.230s 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:15.381 ************************************ 00:32:15.381 END TEST nvmf_bdev_io_wait 00:32:15.381 ************************************ 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:15.381 ************************************ 00:32:15.381 START TEST nvmf_queue_depth 00:32:15.381 ************************************ 00:32:15.381 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:15.381 * Looking for test storage... 
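
Stepping back over the teardown that just ran: nvmftestfini (bdev_io_wait.sh@46 above) unloads the host-side modules pulled in by the earlier modprobe nvme-tcp (the rmmod lines), kills the target (pid 1465311), and then nvmf_tcp_fini undoes the firewall and namespace work. Condensed from the trace, with the namespace removal inferred because _remove_spdk_ns runs with xtrace suppressed (the eval '... 15> /dev/null' line):

# iptr: restore every rule EXCEPT those the test tagged with an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
# deleting the namespace is what hands cvl_0_0 back to the root namespace
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

Because every rule added by ipts() carried the SPDK_NVMF comment, the grep -v filter strips exactly the test's rules with no bookkeeping of rule numbers. With that, nvmf_bdev_io_wait is done (13.7 s wall time above) and the harness moves on to nvmf_queue_depth, whose test-storage probe continues below.
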
00:32:15.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.381 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:15.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.382 --rc genhtml_branch_coverage=1 00:32:15.382 --rc genhtml_function_coverage=1 00:32:15.382 --rc genhtml_legend=1 00:32:15.382 --rc geninfo_all_blocks=1 00:32:15.382 --rc geninfo_unexecuted_blocks=1 00:32:15.382 00:32:15.382 ' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:15.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.382 --rc genhtml_branch_coverage=1 00:32:15.382 --rc genhtml_function_coverage=1 00:32:15.382 --rc genhtml_legend=1 00:32:15.382 --rc geninfo_all_blocks=1 00:32:15.382 --rc geninfo_unexecuted_blocks=1 00:32:15.382 00:32:15.382 ' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:15.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.382 --rc genhtml_branch_coverage=1 00:32:15.382 --rc genhtml_function_coverage=1 00:32:15.382 --rc genhtml_legend=1 00:32:15.382 --rc geninfo_all_blocks=1 00:32:15.382 --rc geninfo_unexecuted_blocks=1 00:32:15.382 00:32:15.382 ' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:15.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.382 --rc genhtml_branch_coverage=1 00:32:15.382 --rc genhtml_function_coverage=1 00:32:15.382 --rc genhtml_legend=1 00:32:15.382 --rc geninfo_all_blocks=1 00:32:15.382 --rc 
geninfo_unexecuted_blocks=1 00:32:15.382 00:32:15.382 ' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.382 18:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:23.525 18:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:23.525 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:23.525 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.525 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:32:23.526 Found net devices under 0000:31:00.0: cvl_0_0 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:23.526 Found net devices under 0000:31:00.1: cvl_0_1 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:23.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:32:23.526 00:32:23.526 --- 10.0.0.2 ping statistics --- 00:32:23.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.526 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:23.526 00:32:23.526 --- 10.0.0.1 ping statistics --- 00:32:23.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.526 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1470747 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1470747 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1470747 ']' 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.526 18:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:23.526 [2024-10-08 18:48:17.017213] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:23.526 [2024-10-08 18:48:17.018379] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:32:23.526 [2024-10-08 18:48:17.018431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.526 [2024-10-08 18:48:17.112225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.526 [2024-10-08 18:48:17.205087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.526 [2024-10-08 18:48:17.205147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.526 [2024-10-08 18:48:17.205156] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.526 [2024-10-08 18:48:17.205164] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.526 [2024-10-08 18:48:17.205170] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.526 [2024-10-08 18:48:17.205965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.526 [2024-10-08 18:48:17.281040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:23.526 [2024-10-08 18:48:17.281326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
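With the data path up, the target is launched inside the namespace with interrupt mode enabled, which is exactly what the thread.c/reactor.c notices above report back (a sketch of the traced command; waitforlisten is the harness helper that polls until the RPC socket answers):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs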
00:32:23.788 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.788 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:23.788 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:23.788 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:23.788 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 [2024-10-08 18:48:17.886825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 Malloc0 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 [2024-10-08 18:48:17.979041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1470906 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1470906 /var/tmp/bdevperf.sock 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1470906 ']' 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:24.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.050 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.050 [2024-10-08 18:48:18.037152] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
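Assembled from the rpc_cmd calls traced above, the whole queue-depth scenario comes down to the following (a sketch; rpc_cmd wraps scripts/rpc.py, defaulting to /var/tmp/spdk.sock unless -s points it at bdevperf's socket):

  # Target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks,
  # one subsystem with a namespace and a TCP listener on 4420.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf issuing 4 KiB verify I/O at queue depth 1024 for 10 s.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests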
00:32:24.050 [2024-10-08 18:48:18.037216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470906 ] 00:32:24.050 [2024-10-08 18:48:18.104311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.311 [2024-10-08 18:48:18.202184] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.882 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.882 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:24.882 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:24.882 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.882 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.143 NVMe0n1 00:32:25.143 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.143 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:25.143 Running I/O for 10 seconds... 00:32:27.471 8192.00 IOPS, 32.00 MiB/s [2024-10-08T16:48:22.470Z] 8696.50 IOPS, 33.97 MiB/s [2024-10-08T16:48:23.411Z] 9218.33 IOPS, 36.01 MiB/s [2024-10-08T16:48:24.354Z] 10287.00 IOPS, 40.18 MiB/s [2024-10-08T16:48:25.296Z] 10931.60 IOPS, 42.70 MiB/s [2024-10-08T16:48:26.238Z] 11400.17 IOPS, 44.53 MiB/s [2024-10-08T16:48:27.181Z] 11709.86 IOPS, 45.74 MiB/s [2024-10-08T16:48:28.564Z] 11942.62 IOPS, 46.65 MiB/s [2024-10-08T16:48:29.505Z] 12142.33 IOPS, 47.43 MiB/s [2024-10-08T16:48:29.505Z] 12284.60 IOPS, 47.99 MiB/s 00:32:35.448 Latency(us) 00:32:35.448 [2024-10-08T16:48:29.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.448 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:35.448 Verification LBA range: start 0x0 length 0x4000 00:32:35.448 NVMe0n1 : 10.10 12263.84 47.91 0.00 0.00 82878.93 24576.00 75584.85 00:32:35.448 [2024-10-08T16:48:29.505Z] =================================================================================================================== 00:32:35.448 [2024-10-08T16:48:29.505Z] Total : 12263.84 47.91 0.00 0.00 82878.93 24576.00 75584.85 00:32:35.448 { 00:32:35.448 "results": [ 00:32:35.448 { 00:32:35.448 "job": "NVMe0n1", 00:32:35.448 "core_mask": "0x1", 00:32:35.448 "workload": "verify", 00:32:35.448 "status": "finished", 00:32:35.448 "verify_range": { 00:32:35.448 "start": 0, 00:32:35.448 "length": 16384 00:32:35.448 }, 00:32:35.448 "queue_depth": 1024, 00:32:35.448 "io_size": 4096, 00:32:35.448 "runtime": 10.100426, 00:32:35.448 "iops": 12263.839168763772, 00:32:35.448 "mibps": 47.905621752983485, 00:32:35.448 "io_failed": 0, 00:32:35.448 "io_timeout": 0, 00:32:35.448 "avg_latency_us": 82878.9284800732, 00:32:35.448 "min_latency_us": 24576.0, 00:32:35.448 "max_latency_us": 75584.85333333333 00:32:35.448 } 00:32:35.448 ], 
00:32:35.448 "core_count": 1 00:32:35.448 } 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1470906 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1470906 ']' 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1470906 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470906 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470906' 00:32:35.448 killing process with pid 1470906 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1470906 00:32:35.448 Received shutdown signal, test time was about 10.000000 seconds 00:32:35.448 00:32:35.448 Latency(us) 00:32:35.448 [2024-10-08T16:48:29.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.448 [2024-10-08T16:48:29.505Z] =================================================================================================================== 00:32:35.448 [2024-10-08T16:48:29.505Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1470906 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:35.448 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:35.448 rmmod nvme_tcp 00:32:35.709 rmmod nvme_fabrics 00:32:35.709 rmmod nvme_keyring 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:35.709 18:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1470747 ']' 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1470747 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1470747 ']' 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1470747 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470747 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470747' 00:32:35.709 killing process with pid 1470747 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1470747 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1470747 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:35.709 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:32:35.970 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:35.970 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:35.970 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.970 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.970 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:37.880 00:32:37.880 real 0m22.866s 00:32:37.880 user 0m25.050s 00:32:37.880 sys 0m7.603s 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:37.880 ************************************ 00:32:37.880 END TEST nvmf_queue_depth 00:32:37.880 ************************************ 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:37.880 ************************************ 00:32:37.880 START TEST nvmf_target_multipath 00:32:37.880 ************************************ 00:32:37.880 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:38.141 * Looking for test storage... 00:32:38.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:38.141 18:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.141 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:38.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.142 --rc genhtml_branch_coverage=1 00:32:38.142 --rc genhtml_function_coverage=1 00:32:38.142 --rc genhtml_legend=1 00:32:38.142 --rc geninfo_all_blocks=1 00:32:38.142 --rc geninfo_unexecuted_blocks=1 00:32:38.142 00:32:38.142 ' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:38.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.142 --rc genhtml_branch_coverage=1 00:32:38.142 --rc genhtml_function_coverage=1 00:32:38.142 --rc genhtml_legend=1 00:32:38.142 --rc geninfo_all_blocks=1 00:32:38.142 --rc geninfo_unexecuted_blocks=1 00:32:38.142 00:32:38.142 ' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:38.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.142 --rc genhtml_branch_coverage=1 00:32:38.142 --rc genhtml_function_coverage=1 00:32:38.142 --rc genhtml_legend=1 00:32:38.142 --rc geninfo_all_blocks=1 00:32:38.142 --rc 
geninfo_unexecuted_blocks=1 00:32:38.142 00:32:38.142 ' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:38.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.142 --rc genhtml_branch_coverage=1 00:32:38.142 --rc genhtml_function_coverage=1 00:32:38.142 --rc genhtml_legend=1 00:32:38.142 --rc geninfo_all_blocks=1 00:32:38.142 --rc geninfo_unexecuted_blocks=1 00:32:38.142 00:32:38.142 ' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
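As the common.sh lines above show, the initiator's NVMe host identity is regenerated on every run (a sketch; the exact parameter expansion is an assumption, but the relationship matches the traced values — the hostid is the uuid tail of the hostnqn):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare uuid (assumed derivation; values match the trace)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")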
00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.142 18:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.142 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
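The PCI scan that follows repeats the pattern already seen in the queue-depth run: match known Intel/Mellanox NIC device IDs, then collect the kernel netdev names under each matching function (an illustrative sketch of the loop, not the verbatim harness code):

  # E810 NICs carry device IDs 0x1592/0x159b; both ports in this run are
  # 0x159b, bound to the 'ice' driver.
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # empty unless a netdev is bound
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep ifnames: cvl_0_0, cvl_0_1
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done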
00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.282 18:48:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:46.282 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:46.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.282 18:48:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:46.282 Found net devices under 0000:31:00.0: cvl_0_0 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:46.282 Found net devices under 0000:31:00.1: cvl_0_1 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.282 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:46.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:32:46.283 00:32:46.283 --- 10.0.0.2 ping statistics --- 00:32:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.283 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:32:46.283 00:32:46.283 --- 10.0.0.1 ping statistics --- 00:32:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.283 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:46.283 only one NIC for nvmf test 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.283 rmmod nvme_tcp 00:32:46.283 rmmod nvme_fabrics 00:32:46.283 rmmod nvme_keyring 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:46.283 18:48:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.283 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:48.194 18:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:48.194 00:32:48.194 real 0m10.061s 00:32:48.194 user 0m2.177s 00:32:48.194 sys 0m5.808s 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.194 18:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:48.194 ************************************ 00:32:48.194 END TEST nvmf_target_multipath 00:32:48.194 ************************************ 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:48.194 ************************************ 00:32:48.194 START TEST nvmf_zcopy 00:32:48.194 ************************************ 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:48.194 * Looking for test storage... 
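The nvmf_tcp_init/nvmf_tcp_fini xtrace above (nvmf/common.sh@250-303) compresses to a small recipe: discover the NIC ports via sysfs, hide one port in a private network namespace as the target (10.0.0.2/24 on cvl_0_0), leave the other in the root namespace as the initiator (10.0.0.1/24 on cvl_0_1), open TCP port 4420 with a tagged iptables rule, and verify reachability with one ping in each direction; teardown filters the tagged rule back out and removes the namespace. The same sequence repeats below for every test in this run. A condensed sketch, assuming root and the interface names from this rig (the function names are ours, not SPDK's):

#!/usr/bin/env bash
# Sketch of the loopback topology nvmf/common.sh builds for NET_TYPE=phy.
# Interfaces, addresses, and the iptables comment tag are taken from the log.
set -euo pipefail
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

discover_ports() {
    # Map each supported PCI function to its kernel netdev via sysfs,
    # mirroring the "Found net devices under ..." lines above.
    local pci dev
    for pci in 0000:31:00.0 0000:31:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
}

setup_ns() {
    ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"        # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Tag the rule so teardown can strip exactly what setup added:
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> initiator
}

teardown_ns() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # the iptr helper above
    ip netns delete "$NS" 2> /dev/null || true  # _remove_spdk_ns does more bookkeeping
    ip -4 addr flush "$INITIATOR_IF"
}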
00:32:48.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.194 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:48.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.456 --rc genhtml_branch_coverage=1 00:32:48.456 --rc genhtml_function_coverage=1 00:32:48.456 --rc genhtml_legend=1 00:32:48.456 --rc geninfo_all_blocks=1 00:32:48.456 --rc geninfo_unexecuted_blocks=1 00:32:48.456 00:32:48.456 ' 00:32:48.456 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:48.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.456 --rc genhtml_branch_coverage=1 00:32:48.457 --rc genhtml_function_coverage=1 00:32:48.457 --rc genhtml_legend=1 00:32:48.457 --rc geninfo_all_blocks=1 00:32:48.457 --rc geninfo_unexecuted_blocks=1 00:32:48.457 00:32:48.457 ' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.457 --rc genhtml_branch_coverage=1 00:32:48.457 --rc genhtml_function_coverage=1 00:32:48.457 --rc genhtml_legend=1 00:32:48.457 --rc geninfo_all_blocks=1 00:32:48.457 --rc geninfo_unexecuted_blocks=1 00:32:48.457 00:32:48.457 ' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.457 --rc genhtml_branch_coverage=1 00:32:48.457 --rc genhtml_function_coverage=1 00:32:48.457 --rc genhtml_legend=1 00:32:48.457 --rc geninfo_all_blocks=1 00:32:48.457 --rc geninfo_unexecuted_blocks=1 00:32:48.457 00:32:48.457 ' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
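The scripts/common.sh trace above is the harness's pure-bash version comparison, here deciding that lcov 1.15 is older than 2 so the branch/function coverage LCOV_OPTS get applied. The idea: split both version strings on '.', '-' or ':', then compare numerically component by component, with missing components treated as 0. A self-contained re-derivation (ver_lt is our name; the real cmp_versions also validates each component through its decimal helper, which this sketch assumes away):

# Pure-bash "is version A < version B?", mirroring the cmp_versions/lt trace.
ver_lt() {
    local IFS=.-:            # same separators scripts/common.sh splits on
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v x y
    for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
        x=${a[v]:-0} y=${b[v]:-0}     # absent components compare as 0
        (( 10#$x > 10#$y )) && return 1
        (( 10#$x < 10#$y )) && return 0
    done
    return 1                          # equal versions: not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"      # prints: 1.15 < 2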
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.457 18:48:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:48.457 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.597 18:48:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:56.597 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:56.597 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:56.597 Found net devices under 0000:31:00.0: cvl_0_0 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.597 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:56.597 Found net devices under 0000:31:00.1: cvl_0_1 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.598 18:48:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:32:56.598 00:32:56.598 --- 10.0.0.2 ping statistics --- 00:32:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.598 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:56.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:56.598 00:32:56.598 --- 10.0.0.1 ping statistics --- 00:32:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.598 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1481550 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1481550 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1481550 ']' 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:56.598 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.598 [2024-10-08 18:48:50.026178] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:56.598 [2024-10-08 18:48:50.027324] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:32:56.598 [2024-10-08 18:48:50.027372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.598 [2024-10-08 18:48:50.115909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.598 [2024-10-08 18:48:50.209718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.598 [2024-10-08 18:48:50.209781] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.598 [2024-10-08 18:48:50.209790] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.598 [2024-10-08 18:48:50.209797] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.598 [2024-10-08 18:48:50.209803] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.598 [2024-10-08 18:48:50.210642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.598 [2024-10-08 18:48:50.286970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:56.598 [2024-10-08 18:48:50.287255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
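The launch just traced (nvmf/common.sh@506-508) is the heart of nvmfappstart: start nvmf_tgt inside the target namespace with interrupt mode enabled and a one-core mask, then block until its JSON-RPC socket answers. The same pattern stripped to essentials; the retry loop stands in for the harness's waitforlisten, which also handles timeouts and stale sockets:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -i 0: shared-memory id; -e 0xFFFF: enable all tracepoint groups;
# --interrupt-mode: reactors sleep on fds instead of busy-polling;
# -m 0x2: pin the single reactor to core 1 ("Reactor started on core 1" above).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Simplified waitforlisten: poll the RPC socket until the app responds.
# The unix socket lives on the filesystem, so rpc.py reaches it from the
# root namespace without ip netns exec.
for _ in {1..100}; do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done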
00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.859 [2024-10-08 18:48:50.887531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.859 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.859 [2024-10-08 18:48:50.915809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:57.120 18:48:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 malloc0 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.120 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:57.121 { 00:32:57.121 "params": { 00:32:57.121 "name": "Nvme$subsystem", 00:32:57.121 "trtype": "$TEST_TRANSPORT", 00:32:57.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.121 "adrfam": "ipv4", 00:32:57.121 "trsvcid": "$NVMF_PORT", 00:32:57.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.121 "hdgst": ${hdgst:-false}, 00:32:57.121 "ddgst": ${ddgst:-false} 00:32:57.121 }, 00:32:57.121 "method": "bdev_nvme_attach_controller" 00:32:57.121 } 00:32:57.121 EOF 00:32:57.121 )") 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:32:57.121 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:57.121 "params": { 00:32:57.121 "name": "Nvme1", 00:32:57.121 "trtype": "tcp", 00:32:57.121 "traddr": "10.0.0.2", 00:32:57.121 "adrfam": "ipv4", 00:32:57.121 "trsvcid": "4420", 00:32:57.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.121 "hdgst": false, 00:32:57.121 "ddgst": false 00:32:57.121 }, 00:32:57.121 "method": "bdev_nvme_attach_controller" 00:32:57.121 }' 00:32:57.121 [2024-10-08 18:48:51.035060] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
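Stripped of the rpc_cmd wrapper, zcopy.sh@22-30 above provisions the whole target in five RPC calls. Reading the flags against rpc.py (worth re-checking on your SPDK revision): --zcopy enables zero-copy receives on the TCP transport, -o disables the C2H success optimization, -c 0 zeroes the in-capsule data size, -a allows any host, -s sets the serial number, and -m 10 caps the namespace count. Replayed directly:

rpc="$SPDK/scripts/rpc.py"    # defaults to the /var/tmp/spdk.sock used above

"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0      # 32 MiB bdev with 4 KiB blocks
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1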
00:32:57.121 [2024-10-08 18:48:51.035128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481778 ] 00:32:57.121 [2024-10-08 18:48:51.116210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.382 [2024-10-08 18:48:51.213223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.643 Running I/O for 10 seconds... 00:32:59.529 6263.00 IOPS, 48.93 MiB/s [2024-10-08T16:48:54.972Z] 6317.00 IOPS, 49.35 MiB/s [2024-10-08T16:48:55.914Z] 6322.67 IOPS, 49.40 MiB/s [2024-10-08T16:48:56.856Z] 6335.50 IOPS, 49.50 MiB/s [2024-10-08T16:48:57.798Z] 6562.00 IOPS, 51.27 MiB/s [2024-10-08T16:48:58.739Z] 7044.17 IOPS, 55.03 MiB/s [2024-10-08T16:48:59.823Z] 7388.71 IOPS, 57.72 MiB/s [2024-10-08T16:49:00.762Z] 7647.75 IOPS, 59.75 MiB/s [2024-10-08T16:49:01.701Z] 7849.22 IOPS, 61.32 MiB/s [2024-10-08T16:49:01.701Z] 8011.60 IOPS, 62.59 MiB/s 00:33:07.644 Latency(us) 00:33:07.644 [2024-10-08T16:49:01.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.644 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:07.644 Verification LBA range: start 0x0 length 0x1000 00:33:07.644 Nvme1n1 : 10.01 8015.54 62.62 0.00 0.00 15925.65 2157.23 28398.93 00:33:07.644 [2024-10-08T16:49:01.701Z] =================================================================================================================== 00:33:07.644 [2024-10-08T16:49:01.701Z] Total : 8015.54 62.62 0.00 0.00 15925.65 2157.23 28398.93 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1483756 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:07.644 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:07.644 { 00:33:07.644 "params": { 00:33:07.644 "name": "Nvme$subsystem", 00:33:07.644 "trtype": "$TEST_TRANSPORT", 00:33:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.644 "adrfam": "ipv4", 00:33:07.645 "trsvcid": "$NVMF_PORT", 00:33:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.645 "hdgst": ${hdgst:-false}, 00:33:07.645 "ddgst": ${ddgst:-false} 00:33:07.645 }, 00:33:07.645 "method": "bdev_nvme_attach_controller" 00:33:07.645 } 00:33:07.645 EOF 00:33:07.645 )") 00:33:07.905 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:07.905 
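Two quick reads on the verify-run summary above: the MiB/s column is just IOPS times the 8192-byte I/O size from the job line (8015.54 x 8192 / 2^20 = 62.62 MiB/s, matching the report), and Average/min/max are per-I/O latency in microseconds. The climb from ~6.3K to ~8.0K IOPS is visible in the per-second ticks preceding the table. The arithmetic, checkable in place:

awk 'BEGIN { printf "%.2f MiB/s\n", 8015.54 * 8192 / (1024 * 1024) }'   # 62.62 MiB/s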
[2024-10-08 18:49:01.703074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.703103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:07.906 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:07.906 18:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:07.906 "params": { 00:33:07.906 "name": "Nvme1", 00:33:07.906 "trtype": "tcp", 00:33:07.906 "traddr": "10.0.0.2", 00:33:07.906 "adrfam": "ipv4", 00:33:07.906 "trsvcid": "4420", 00:33:07.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.906 "hdgst": false, 00:33:07.906 "ddgst": false 00:33:07.906 }, 00:33:07.906 "method": "bdev_nvme_attach_controller" 00:33:07.906 }' 00:33:07.906 [2024-10-08 18:49:01.715038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.715047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.727037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.727044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.739037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.739045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.751037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.751044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.754462] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
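For reference, this is the config gen_nvmf_target_json hands bdevperf over the /dev/fd/6x process substitutions, with the values from the printf trace above filled in. The inner object is verbatim from the log; the outer "subsystems"/"bdev" wrapper is reconstructed from how bdevperf consumes --json configs, so treat it as illustrative:

cat > /tmp/nvme1.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The harness streams this via --json <(gen_nvmf_target_json) instead of a file:
"$SPDK/build/examples/bdevperf" --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192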
00:33:07.906 [2024-10-08 18:49:01.754508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483756 ] 00:33:07.906 [2024-10-08 18:49:01.763037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.763044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.775037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.775045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.787037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.787045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.799037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.799044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.811037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.811044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.823038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.823046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.831315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.906 [2024-10-08 18:49:01.835037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.835045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.847038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.847047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.859038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.859049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.871037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.871049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.883037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.883046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.885418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.906 [2024-10-08 18:49:01.895037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-08 18:49:01.895046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-08 18:49:01.907047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
00:33:07.906 [2024-10-08 18:49:01.895037 → 18:49:02.075046] error pair repeated 16x at ~12 ms intervals: subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:08.167 [2024-10-08 18:49:02.087037 → 18:49:02.135058] error pair repeated 5x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:08.167 Running I/O for 5 seconds...
00:33:08.167 [2024-10-08 18:49:02.150678 → 18:49:02.260076] error pair repeated 9x at ~13 ms intervals: "Requested NSID 1 already in use" / "Unable to add namespace"
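Note: the repeating pair above and below is the point of this phase of the nvmf_zcopy test: while bdevperf drives I/O, an RPC keeps trying to add a namespace whose NSID is already taken on cnode1, and spdk_nvmf_subsystem_add_ns_ext rejects it by design. A plausible hand-run equivalent is sketched below; the NQN matches the config above, but the bdev name and the loop itself are assumptions, not the test script.

  # Sketch: each logged error pair corresponds to one failed add-namespace
  # RPC. NSID 1 already exists, so the call fails while I/O continues.
  # Bdev name (Malloc0) is an assumption.
  while true; do
      ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
      sleep 0.012   # matches the ~12-13 ms cadence seen in the log
  done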
00:33:08.428 [2024-10-08 18:49:02.274500 → 18:49:02.467316] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:08.428 [2024-10-08 18:49:02.481971 → 18:49:02.676177] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:08.689 [2024-10-08 18:49:02.690308 → 18:49:02.867184] error pair repeated 14x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:08.950 [2024-10-08 18:49:02.880302 → 18:49:03.074374] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:09.211 [2024-10-08 18:49:03.088057 → 18:49:03.143707] error pair repeated 5x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:09.211 18648.00 IOPS, 145.69 MiB/s [2024-10-08T16:49:03.268Z]
00:33:09.211 [2024-10-08 18:49:03.157900 → 18:49:03.266162] error pair repeated 9x: "Requested NSID 1 already in use" / "Unable to add namespace"
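Note: the per-second bdevperf stats here and below are mutually consistent with an 8 KiB I/O size, which is worth a quick check:

  # 18648 IOPS at 8192 B per I/O reproduces the logged bandwidth:
  awk 'BEGIN { printf "%.2f MiB/s\n", 18648 * 8192 / 1024 / 1024 }'   # -> 145.69 MiB/s
  # likewise 18676 * 8192 / 2^20 = 145.91 MiB/s for the next sample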
00:33:09.472 [2024-10-08 18:49:03.279132 → 18:49:03.474264] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:09.473 [2024-10-08 18:49:03.487601 → 18:49:03.682299] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:09.734 [2024-10-08 18:49:03.695773 → 18:49:03.874420] error pair repeated 14x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:09.996 [2024-10-08 18:49:03.887706 → 18:49:04.070745] error pair repeated 14x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:10.256 [2024-10-08 18:49:04.084230 → 18:49:04.139428] error pair repeated 5x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:10.256 18676.00 IOPS, 145.91 MiB/s [2024-10-08T16:49:04.313Z]
00:33:10.256 [2024-10-08 18:49:04.153984 → 18:49:04.263798] error pair repeated 9x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:10.257 [2024-10-08 18:49:04.278069 → 18:49:04.471548] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:10.517 [2024-10-08 18:49:04.486477 → 18:49:04.679291] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:10.778 [2024-10-08 18:49:04.691695 → 18:49:04.874519] error pair repeated 14x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:11.039 [2024-10-08 18:49:04.887669 → 18:49:05.082041] error pair repeated 15x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:11.039 [2024-10-08 18:49:05.095068 → 18:49:05.136044] error pair repeated 4x: "Requested NSID 1 already in use" / "Unable to add namespace"
00:33:11.301 18675.33 IOPS, 145.90 MiB/s [2024-10-08T16:49:05.358Z]
00:33:11.301 [2024-10-08 18:49:05.150153 → 18:49:05.274355] error pair repeated 10x: "Requested NSID 1 already in use" / "Unable to add namespace"
18:49:05.287561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.301 [2024-10-08 18:49:05.287575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.301 [2024-10-08 18:49:05.302125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.301 [2024-10-08 18:49:05.302140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.301 [2024-10-08 18:49:05.315739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.301 [2024-10-08 18:49:05.315754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.301 [2024-10-08 18:49:05.330777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.301 [2024-10-08 18:49:05.330793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.301 [2024-10-08 18:49:05.344127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.301 [2024-10-08 18:49:05.344142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.358792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.358808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.372148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.372163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.386224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.386244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.399632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.399647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.414477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.414494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.428051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.428066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.442334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.442348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.455654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.455669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.469889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.469904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.482799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.482814] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.496362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.496377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.510909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.510923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.524249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.524263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.538695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.538709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.551847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.551862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.565945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.565959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.579706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.579721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.594239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.594254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.562 [2024-10-08 18:49:05.607442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.562 [2024-10-08 18:49:05.607456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.622380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.622395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.635625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.635639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.650205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.650223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.663730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.663744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.678713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.678727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.692015] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.692029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.706420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.706435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.719699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.719713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.734060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.734074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.747225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.747239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.759417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.823 [2024-10-08 18:49:05.759431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.823 [2024-10-08 18:49:05.774347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.774362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.787730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.787743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.802793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.802807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.816224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.816238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.830496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.830511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.843937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.843951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.858489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.858503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.824 [2024-10-08 18:49:05.871886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.824 [2024-10-08 18:49:05.871901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.886210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.886225] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.899404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.899418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.914089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.914108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.927152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.927167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.939938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.939952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.954125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.954139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.967636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.967650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.981748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.981763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:05.994702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:05.994716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.007878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.007892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.022464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.022478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.036078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.036092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.050399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.050414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.063848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.063862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.078201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.078216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.091533] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.091547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.106615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.106630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.119613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.119627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.085 [2024-10-08 18:49:06.134178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.085 [2024-10-08 18:49:06.134193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.147708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.147723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 18672.25 IOPS, 145.88 MiB/s [2024-10-08T16:49:06.403Z] [2024-10-08 18:49:06.162451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.162465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.175824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.175839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.190385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.190400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.203563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.203577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.217839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.217854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.230882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.230897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.243915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.243931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.258632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.258648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.271888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.271902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.286031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:12.346 [2024-10-08 18:49:06.286046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.299231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.299245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.312468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.312482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.326714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.326728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.339767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.339781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.353850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.353865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.367563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.367577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.382239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.382253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.346 [2024-10-08 18:49:06.395270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.346 [2024-10-08 18:49:06.395285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.408529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.408545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.422831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.422846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.436274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.436289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.450118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.450133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.463256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.463270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.476203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.476217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.490254] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.490268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.503614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.503628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.517748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.517762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.530849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.530864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.543609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.543623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.558510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.558525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.571490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.571504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.586487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.586502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.600069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.600084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.614314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.614329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.627834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.627848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.641899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.641914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.608 [2024-10-08 18:49:06.655299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.608 [2024-10-08 18:49:06.655314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.668258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.668274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.682430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.682445] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.695374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.695389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.710216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.710230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.723818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.723833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.738056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.738071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.751383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.751397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.766208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.766223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.869 [2024-10-08 18:49:06.779992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.869 [2024-10-08 18:49:06.780006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.793932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.793947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.807349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.807363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.822537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.822552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.835940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.835954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.850334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.850349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.863691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.863705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.878292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.878307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.891811] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.891826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.906366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.906381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.870 [2024-10-08 18:49:06.920004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.870 [2024-10-08 18:49:06.920019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:06.934317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:06.934332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:06.947481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:06.947495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:06.962176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:06.962191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:06.975239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:06.975254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:06.988076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:06.988090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:07.002658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:07.002673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:07.016093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:07.016107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:07.030490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:07.030505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:07.044262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.131 [2024-10-08 18:49:07.044276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.131 [2024-10-08 18:49:07.058408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.132 [2024-10-08 18:49:07.058422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.132 [2024-10-08 18:49:07.071468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.132 [2024-10-08 18:49:07.071482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.132 [2024-10-08 18:49:07.086163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.132 [2024-10-08 18:49:07.086178] 
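The repeating pair above is the expected failure path here: NSID 1 stays attached while a driver loop keeps retrying the add, so every RPC fails identically. A minimal stand-alone reproduction would look like this (bash sketch against a running SPDK target with the default RPC socket; an illustration, not the test script itself):

    # Re-request an NSID that is already attached; every call logs the
    # subsystem.c "already in use" / nvmf_rpc.c "Unable to add namespace" pair.
    for _ in $(seq 1 50); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
        sleep 0.013   # roughly the retry cadence visible in the timestamps
    done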
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:13.132 [... the error pair continues at the same cadence through 18:49:07.152; duplicate entries elided ...]
00:33:13.132 18685.60 IOPS, 145.98 MiB/s
00:33:13.132 Latency(us)
00:33:13.132 [2024-10-08T16:49:07.189Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:33:13.132 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:13.132 Nvme1n1            : 5.01        18687.02  145.99  0.00    0.00  6844.19  2539.52  11359.57
00:33:13.132 [2024-10-08T16:49:07.189Z] ===================================================================================================================
00:33:13.132 [2024-10-08T16:49:07.189Z] Total              :             18687.02  145.99  0.00    0.00  6844.19  2539.52  11359.57
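Two quick consistency checks on the summary table above: at queue depth 128, Little's law gives an expected average latency of 128 / 18687.02 IOPS, about 6.85 ms, in line with the reported 6844.19 us; and 18687.02 IOPS at 8192 bytes per I/O works out to about 146 MiB/s, matching the throughput column.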
00:33:13.132 [2024-10-08 18:49:07.163044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:13.132 [2024-10-08 18:49:07.163063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:13.393 [... the pair repeats every ~12 ms through 18:49:07.271; duplicate entries elided ...]
00:33:13.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1483756) - No such process
00:33:13.393 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1483756
00:33:13.393 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:13.393 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:13.393 delay0
00:33:13.393 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:13.393 [... per-RPC xtrace_disable / 'set +x' / '[[ 0 == 0 ]]' scaffolding elided ...]
00:33:13.393 18:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
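The wait/remove/create/add sequence above swaps the in-use malloc namespace for a delay bdev before the abort run. Issued directly, the same three RPCs would look like this (sketch; scripts/rpc.py with its default socket, latencies given in microseconds):

    # Detach NSID 1, wrap malloc0 in a 1-second-latency delay bdev
    # (-r/-t avg/p99 read, -w/-n avg/p99 write), then re-attach it as NSID 1.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1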
00:33:13.393 [2024-10-08 18:49:07.423413] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:19.976 Initializing NVMe Controllers
00:33:19.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:19.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:19.976 Initialization complete. Launching workers.
00:33:19.976 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1331
00:33:19.976 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1599, failed to submit 52
00:33:19.976 success 1445, unsuccessful 154, failed 0
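The abort counters above balance exactly: 1599 aborts submitted plus 52 that failed to submit gives 1651, the same total as the 320 completed plus 1331 failed I/Os reported for the namespace; and 1445 successful plus 154 unsuccessful accounts for all 1599 submitted aborts, with 0 hard failures.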
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:19.976 rmmod nvme_tcp
00:33:19.976 rmmod nvme_fabrics
00:33:19.976 rmmod nvme_keyring
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:19.976 18:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1481550
00:33:20.236 killing process with pid 1481550
00:33:20.236 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1481550
00:33:20.236 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1481550
00:33:20.237 [... interleaved guard and retry checks ('[' tcp == tcp ']', set +e/-e, kill -0 liveness probes, uname/ps process-name checks, '[' '' == iso ']') elided throughout this teardown ...]
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:20.237 18:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:22.784
00:33:22.784 real	0m34.182s
00:33:22.784 user	0m43.354s
00:33:22.784 sys	0m12.438s
00:33:22.784 ************************************
00:33:22.784 END TEST nvmf_zcopy
00:33:22.784 ************************************
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:22.784 ************************************
00:33:22.784 START TEST nvmf_nmic
00:33:22.784 ************************************
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:22.784 * Looking for test storage...
00:33:22.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:33:22.784 [... scripts/common.sh cmp_versions trace elided: '1.15' and '2' are split on '.-:' into ver1=(1 15) and ver2=(2), the first fields compare 1 < 2, and lt returns 0 (true) ...]
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:22.784 [... four near-identical multi-line export blocks for LCOV_OPTS and LCOV elided; each carries the lcov_branch_coverage/lcov_function_coverage flags above plus the genhtml_*/geninfo_* rc settings ...]
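The lt 1.15 2 check above is what selects the pre-2.0 lcov option names. A compact stand-in for the same comparison (a sketch assuming GNU sort with -V; not the cmp_versions implementation itself):

    # True when $1 sorts strictly before $2 in version order.
    version_lt() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]; }
    version_lt 1.15 2 && echo "lcov < 2: keep legacy --rc lcov_* option names"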
00:33:22.784 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[... NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS, and NVMF_SERIAL=SPDKISFASTANDAWESOME assignments elided ...]
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
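The host identity above is generated on the fly by nvme-cli. Reproduced by hand it would look like this (sketch; the connect target shown is the 10.0.0.2:4420 subsystem used earlier in this log, and the generated UUID differs per invocation):

    # Mint a host NQN (random UUID each call), then use it, plus the bare
    # UUID as the host ID, when connecting to the target.
    hostnqn=$(nvme gen-hostnqn)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn "$hostnqn" --hostid "${hostnqn##*:}"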
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
[... wpdk_common.sh / spdk-pkgdep path-existence checks elided ...]
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated seven more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same entries with /opt/go moved to the front; duplicates elided ...]
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same entries with /opt/protoc moved to the front; duplicates elided ...]
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... remainder of the echoed PATH elided ...]
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
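The PATH above has accumulated the same three toolchain directories once per sourced script. Nothing in the run depends on deduplicating it, but a pass like this (sketch; bash with awk) would collapse the repeats while preserving first-seen order:

    # Split PATH on ':', keep only the first occurrence of each entry, re-join.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH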
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.785 18:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.920 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.920 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.920 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.920 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.921 18:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:30.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.921 18:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:30.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:30.921 Found net devices under 0000:31:00.0: cvl_0_0 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.921 
18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:30.921 Found net devices under 0000:31:00.1: cvl_0_1 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
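
The nvmf_tcp_init steps above isolate the target-side E810 port (cvl_0_0) in its own network namespace, so initiator and target traffic crosses the physical link between the two ports rather than the kernel's local delivery path; the link bring-up, the firewall rule, and the ping checks follow just below. A minimal standalone sketch of the same topology, reusing the interface names and 10.0.0.x addressing from this log (on other hardware the cvl_* names would differ):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
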
00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.921 18:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:33:30.921 00:33:30.921 --- 10.0.0.2 ping statistics --- 00:33:30.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.921 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:33:30.921 00:33:30.921 --- 10.0.0.1 ping statistics --- 00:33:30.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.921 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1490311 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 1490311 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1490311 ']' 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.921 18:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.921 [2024-10-08 18:49:24.246965] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:30.921 [2024-10-08 18:49:24.248099] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:33:30.921 [2024-10-08 18:49:24.248148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.921 [2024-10-08 18:49:24.338563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.921 [2024-10-08 18:49:24.434663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.921 [2024-10-08 18:49:24.434729] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.921 [2024-10-08 18:49:24.434737] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.921 [2024-10-08 18:49:24.434745] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.921 [2024-10-08 18:49:24.434751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.921 [2024-10-08 18:49:24.436964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.921 [2024-10-08 18:49:24.437125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.921 [2024-10-08 18:49:24.437432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:30.921 [2024-10-08 18:49:24.437436] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.921 [2024-10-08 18:49:24.524880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:30.921 [2024-10-08 18:49:24.526117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:30.921 [2024-10-08 18:49:24.526251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
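
nvmfappstart above launches the target inside that namespace with -m 0xF (four cores) and --interrupt-mode, then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough by-hand equivalent (binary path and flags copied from the log; the polling loop is only an assumption about what waitforlisten does, not its actual implementation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # wait for the UNIX-domain RPC socket to come up before issuing RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The surrounding thread.c *NOTICE* lines are the visible effect of --interrupt-mode: every reactor and every nvmf_tgt poll-group thread is switched to event-driven interrupt operation instead of busy polling.
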
00:33:30.921 [2024-10-08 18:49:24.526449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:30.921 [2024-10-08 18:49:24.526579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.182 [2024-10-08 18:49:25.146500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.182 Malloc0 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
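
The rpc_cmd calls above provision the freshly started target: one TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem that accepts any host (-a) with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. Spelled out as a plain rpc.py session (rpc_cmd in the harness is essentially a wrapper that forwards these arguments to scripts/rpc.py on the default socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

test case1 below then tries to attach the same Malloc0 to a second subsystem (cnode2) and expects the RPC to fail: cnode1 already claimed the bdev exclusive_write, so the second open is rejected, and the test passes precisely because that error is the anticipated result.
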
00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.182 [2024-10-08 18:49:25.226819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:31.182 test case1: single bdev can't be used in multiple subsystems 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.182 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.444 [2024-10-08 18:49:25.262086] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:31.444 [2024-10-08 18:49:25.262113] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:31.444 [2024-10-08 18:49:25.262121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.444 request: 00:33:31.444 { 00:33:31.444 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:31.444 "namespace": { 00:33:31.444 "bdev_name": "Malloc0", 00:33:31.444 "no_auto_visible": false 00:33:31.444 }, 00:33:31.444 "method": "nvmf_subsystem_add_ns", 00:33:31.444 "req_id": 1 00:33:31.444 } 00:33:31.444 Got JSON-RPC error response 00:33:31.444 response: 00:33:31.444 { 00:33:31.444 "code": -32602, 00:33:31.444 "message": "Invalid parameters" 00:33:31.444 } 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:31.444 18:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:31.444 Adding namespace failed - expected result. 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:31.444 test case2: host connect to nvmf target in multiple paths 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.444 [2024-10-08 18:49:25.274231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.444 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:31.705 18:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:32.276 18:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:32.276 18:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:32.276 18:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:32.276 18:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:32.276 18:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:34.189 18:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:34.189 [global] 00:33:34.189 thread=1 00:33:34.189 invalidate=1 
00:33:34.189 rw=write 00:33:34.189 time_based=1 00:33:34.189 runtime=1 00:33:34.189 ioengine=libaio 00:33:34.189 direct=1 00:33:34.189 bs=4096 00:33:34.189 iodepth=1 00:33:34.189 norandommap=0 00:33:34.189 numjobs=1 00:33:34.189 00:33:34.189 verify_dump=1 00:33:34.189 verify_backlog=512 00:33:34.189 verify_state_save=0 00:33:34.189 do_verify=1 00:33:34.189 verify=crc32c-intel 00:33:34.189 [job0] 00:33:34.189 filename=/dev/nvme0n1 00:33:34.189 Could not set queue depth (nvme0n1) 00:33:34.773 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:34.773 fio-3.35 00:33:34.773 Starting 1 thread 00:33:35.714 00:33:35.714 job0: (groupid=0, jobs=1): err= 0: pid=1491212: Tue Oct 8 18:49:29 2024 00:33:35.714 read: IOPS=655, BW=2621KiB/s (2684kB/s)(2624KiB/1001msec) 00:33:35.714 slat (nsec): min=6772, max=59790, avg=22456.46, stdev=7594.20 00:33:35.714 clat (usec): min=443, max=982, avg=785.51, stdev=78.56 00:33:35.714 lat (usec): min=464, max=1007, avg=807.97, stdev=81.95 00:33:35.714 clat percentiles (usec): 00:33:35.714 | 1.00th=[ 553], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 734], 00:33:35.714 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 816], 00:33:35.714 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 873], 95.00th=[ 889], 00:33:35.714 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:33:35.714 | 99.99th=[ 979] 00:33:35.714 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:35.714 slat (nsec): min=9636, max=66902, avg=27795.30, stdev=9846.95 00:33:35.714 clat (usec): min=188, max=598, avg=420.29, stdev=63.14 00:33:35.714 lat (usec): min=221, max=631, avg=448.09, stdev=66.82 00:33:35.714 clat percentiles (usec): 00:33:35.714 | 1.00th=[ 258], 5.00th=[ 310], 10.00th=[ 330], 20.00th=[ 359], 00:33:35.714 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 457], 00:33:35.714 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 486], 95.00th=[ 498], 00:33:35.714 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 603], 00:33:35.714 | 99.99th=[ 603] 00:33:35.714 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:35.714 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:35.714 lat (usec) : 250=0.48%, 500=58.04%, 750=12.74%, 1000=28.75% 00:33:35.714 cpu : usr=2.40%, sys=4.40%, ctx=1680, majf=0, minf=1 00:33:35.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:35.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.714 issued rwts: total=656,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:35.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:35.714 00:33:35.714 Run status group 0 (all jobs): 00:33:35.714 READ: bw=2621KiB/s (2684kB/s), 2621KiB/s-2621KiB/s (2684kB/s-2684kB/s), io=2624KiB (2687kB), run=1001-1001msec 00:33:35.714 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:33:35.714 00:33:35.714 Disk stats (read/write): 00:33:35.714 nvme0n1: ios=580/1024, merge=0/0, ticks=489/412, in_queue=901, util=94.09% 00:33:35.714 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:35.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.974 rmmod nvme_tcp 00:33:35.974 rmmod nvme_fabrics 00:33:35.974 rmmod nvme_keyring 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1490311 ']' 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1490311 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1490311 ']' 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1490311 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:35.974 18:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1490311 00:33:35.974 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:35.974 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:35.974 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1490311' 00:33:35.974 
killing process with pid 1490311 00:33:35.974 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1490311 00:33:35.975 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1490311 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.235 18:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.779 00:33:38.779 real 0m15.892s 00:33:38.779 user 0m36.940s 00:33:38.779 sys 0m7.523s 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:38.779 ************************************ 00:33:38.779 END TEST nvmf_nmic 00:33:38.779 ************************************ 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:38.779 ************************************ 00:33:38.779 START TEST nvmf_fio_target 00:33:38.779 ************************************ 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:38.779 * Looking for test storage... 
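
nvmftestfini above unwinds the setup in reverse: the host-side NVMe/TCP modules are unloaded, the target process is killed and reaped, the firewall rules are dropped, and the namespace is removed. The iptables step is the notable idiom: every rule the test inserted carried an -m comment --comment 'SPDK_NVMF:...' tag, so cleanup can simply filter the saved ruleset instead of deleting rules one by one. A sketch of that teardown (the explicit ip netns delete stands in for the harness's remove_spdk_ns helper and is an assumption about its effect):

    sync
    modprobe -v -r nvme-tcp      # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the target started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                         # discard the target-side namespace
    ip -4 addr flush cvl_0_1                                # clear the initiator address

The nvmf_fio_target test starting here then repeats the same common.sh bootstrap and NIC discovery before running its own fio workloads.
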
00:33:38.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:38.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.779 --rc genhtml_branch_coverage=1 00:33:38.779 --rc genhtml_function_coverage=1 00:33:38.779 --rc genhtml_legend=1 00:33:38.779 --rc geninfo_all_blocks=1 00:33:38.779 --rc geninfo_unexecuted_blocks=1 00:33:38.779 00:33:38.779 ' 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:38.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.779 --rc genhtml_branch_coverage=1 00:33:38.779 --rc genhtml_function_coverage=1 00:33:38.779 --rc genhtml_legend=1 00:33:38.779 --rc geninfo_all_blocks=1 00:33:38.779 --rc geninfo_unexecuted_blocks=1 00:33:38.779 00:33:38.779 ' 00:33:38.779 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:38.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.779 --rc genhtml_branch_coverage=1 00:33:38.779 --rc genhtml_function_coverage=1 00:33:38.779 --rc genhtml_legend=1 00:33:38.779 --rc geninfo_all_blocks=1 00:33:38.779 --rc geninfo_unexecuted_blocks=1 00:33:38.779 00:33:38.779 ' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:38.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.780 --rc genhtml_branch_coverage=1 00:33:38.780 --rc genhtml_function_coverage=1 00:33:38.780 --rc genhtml_legend=1 00:33:38.780 --rc geninfo_all_blocks=1 00:33:38.780 --rc geninfo_unexecuted_blocks=1 00:33:38.780 
00:33:38.780 ' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.780 18:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.925 18:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.925 18:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:46.925 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:46.925 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:46.925 Found net 
devices under 0000:31:00.0: cvl_0_0 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:46.925 Found net devices under 0000:31:00.1: cvl_0_1 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.925 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.926 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.926 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.926 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.926 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.926 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.926 18:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:33:46.926 00:33:46.926 --- 10.0.0.2 ping statistics --- 00:33:46.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.926 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:33:46.926 00:33:46.926 --- 10.0.0.1 ping statistics --- 00:33:46.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.926 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1495890 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1495890 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1495890 ']' 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
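For reference, the nvmfappstart step above reduces to launching the target inside the test namespace and polling until its RPC socket answers. A minimal bash sketch: the nvmf_tgt command line is the one logged here; the wait loop is only an approximation of the harness' waitforlisten helper (trap and cleanup handling omitted):

sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Approximation of waitforlisten: poll the UNIX-domain RPC socket until the app responds.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done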
00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:46.926 18:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.926 [2024-10-08 18:49:40.311906] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:46.926 [2024-10-08 18:49:40.313059] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:33:46.926 [2024-10-08 18:49:40.313108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.926 [2024-10-08 18:49:40.404959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:46.926 [2024-10-08 18:49:40.499047] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.926 [2024-10-08 18:49:40.499109] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.926 [2024-10-08 18:49:40.499118] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.926 [2024-10-08 18:49:40.499125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.926 [2024-10-08 18:49:40.499139] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.926 [2024-10-08 18:49:40.501310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.926 [2024-10-08 18:49:40.501476] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.926 [2024-10-08 18:49:40.501637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.926 [2024-10-08 18:49:40.501638] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.926 [2024-10-08 18:49:40.594223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:46.926 [2024-10-08 18:49:40.594852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:46.926 [2024-10-08 18:49:40.595267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:46.926 [2024-10-08 18:49:40.595766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:46.926 [2024-10-08 18:49:40.595816] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
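Once the reactors are up and every spdk_thread is in interrupt mode, the fio fixture is assembled over RPC. Condensed, the sequence performed by the log records that follow is (rpc.py path shortened into a variable; the hostnqn/hostid flags on the final connect are omitted here for brevity):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512        # run seven times: Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

This yields the four namespaces that appear on the initiator as /dev/nvme0n1 .. /dev/nvme0n4 in the fio job files below.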
00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.188 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.449 [2024-10-08 18:49:41.354658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.449 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.709 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:47.709 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.971 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:47.971 18:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.231 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:48.231 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.231 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:48.231 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:48.492 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.752 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:48.752 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.013 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:49.013 18:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.013 18:49:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:49.013 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:49.274 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:49.535 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:49.535 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:49.535 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:49.535 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:49.795 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.055 [2024-10-08 18:49:43.918623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.055 18:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:50.315 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:50.315 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:50.887 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:50.887 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:33:50.887 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:50.887 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:33:50.887 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:33:50.887 18:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:33:52.800 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:52.800 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:33:52.800 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:52.800 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:33:52.801 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:52.801 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:33:52.801 18:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:52.801 [global] 00:33:52.801 thread=1 00:33:52.801 invalidate=1 00:33:52.801 rw=write 00:33:52.801 time_based=1 00:33:52.801 runtime=1 00:33:52.801 ioengine=libaio 00:33:52.801 direct=1 00:33:52.801 bs=4096 00:33:52.801 iodepth=1 00:33:52.801 norandommap=0 00:33:52.801 numjobs=1 00:33:52.801 00:33:52.801 verify_dump=1 00:33:52.801 verify_backlog=512 00:33:52.801 verify_state_save=0 00:33:52.801 do_verify=1 00:33:52.801 verify=crc32c-intel 00:33:52.801 [job0] 00:33:52.801 filename=/dev/nvme0n1 00:33:52.801 [job1] 00:33:52.801 filename=/dev/nvme0n2 00:33:53.101 [job2] 00:33:53.101 filename=/dev/nvme0n3 00:33:53.101 [job3] 00:33:53.101 filename=/dev/nvme0n4 00:33:53.101 Could not set queue depth (nvme0n1) 00:33:53.101 Could not set queue depth (nvme0n2) 00:33:53.101 Could not set queue depth (nvme0n3) 00:33:53.101 Could not set queue depth (nvme0n4) 00:33:53.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.365 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.365 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.365 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.365 fio-3.35 00:33:53.365 Starting 4 threads 00:33:54.770 00:33:54.770 job0: (groupid=0, jobs=1): err= 0: pid=1497340: Tue Oct 8 18:49:48 2024 00:33:54.770 read: IOPS=498, BW=1994KiB/s (2042kB/s)(1996KiB/1001msec) 00:33:54.770 slat (nsec): min=6441, max=47967, avg=26899.39, stdev=6430.40 00:33:54.770 clat (usec): min=460, max=41668, avg=1471.74, stdev=5060.46 00:33:54.770 lat (usec): min=467, max=41696, avg=1498.64, stdev=5060.65 00:33:54.770 clat percentiles (usec): 00:33:54.770 | 1.00th=[ 490], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 709], 00:33:54.770 | 30.00th=[ 758], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 873], 00:33:54.770 | 70.00th=[ 914], 80.00th=[ 963], 90.00th=[ 1004], 95.00th=[ 1045], 00:33:54.770 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:33:54.770 | 99.99th=[41681] 00:33:54.770 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:54.770 slat (usec): min=9, max=10930, avg=51.94, stdev=481.87 00:33:54.770 clat (usec): min=112, max=950, avg=420.29, stdev=179.48 00:33:54.770 lat (usec): min=122, max=11554, avg=472.23, stdev=524.95 00:33:54.770 clat percentiles (usec): 00:33:54.770 | 1.00th=[ 119], 5.00th=[ 130], 10.00th=[ 149], 20.00th=[ 262], 00:33:54.770 | 30.00th=[ 318], 40.00th=[ 375], 50.00th=[ 420], 60.00th=[ 474], 00:33:54.770 | 70.00th=[ 523], 80.00th=[ 578], 90.00th=[ 652], 95.00th=[ 717], 00:33:54.770 | 99.00th=[ 832], 
99.50th=[ 865], 99.90th=[ 947], 99.95th=[ 947], 00:33:54.770 | 99.99th=[ 947] 00:33:54.770 bw ( KiB/s): min= 4096, max= 4096, per=45.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.770 lat (usec) : 250=9.40%, 500=24.43%, 750=29.08%, 1000=31.75% 00:33:54.770 lat (msec) : 2=4.55%, 50=0.79% 00:33:54.770 cpu : usr=2.20%, sys=3.70%, ctx=1015, majf=0, minf=1 00:33:54.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.770 issued rwts: total=499,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.770 job1: (groupid=0, jobs=1): err= 0: pid=1497343: Tue Oct 8 18:49:48 2024 00:33:54.770 read: IOPS=16, BW=67.9KiB/s (69.6kB/s)(68.0KiB/1001msec) 00:33:54.770 slat (nsec): min=27399, max=28176, avg=27603.76, stdev=228.12 00:33:54.770 clat (usec): min=961, max=42073, avg=39351.83, stdev=9900.92 00:33:54.770 lat (usec): min=989, max=42101, avg=39379.43, stdev=9900.77 00:33:54.770 clat percentiles (usec): 00:33:54.770 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[40633], 20.00th=[41157], 00:33:54.770 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:33:54.770 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:54.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:54.770 | 99.99th=[42206] 00:33:54.770 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:54.770 slat (nsec): min=9528, max=81359, avg=31959.02, stdev=10107.87 00:33:54.770 clat (usec): min=264, max=1060, avg=602.02, stdev=131.23 00:33:54.770 lat (usec): min=274, max=1096, avg=633.98, stdev=135.17 00:33:54.770 clat percentiles (usec): 00:33:54.770 | 1.00th=[ 289], 5.00th=[ 371], 10.00th=[ 433], 20.00th=[ 498], 00:33:54.770 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:33:54.770 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 824], 00:33:54.770 | 99.00th=[ 881], 99.50th=[ 947], 99.90th=[ 1057], 99.95th=[ 1057], 00:33:54.770 | 99.99th=[ 1057] 00:33:54.770 bw ( KiB/s): min= 4096, max= 4096, per=45.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.770 lat (usec) : 500=20.04%, 750=64.08%, 1000=12.67% 00:33:54.770 lat (msec) : 2=0.19%, 50=3.02% 00:33:54.770 cpu : usr=1.50%, sys=1.60%, ctx=530, majf=0, minf=1 00:33:54.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.770 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.770 job2: (groupid=0, jobs=1): err= 0: pid=1497358: Tue Oct 8 18:49:48 2024 00:33:54.770 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:33:54.770 slat (nsec): min=26933, max=27688, avg=27181.12, stdev=221.36 00:33:54.770 clat (usec): min=1175, max=42071, avg=39399.49, stdev=9857.03 00:33:54.770 lat (usec): min=1202, max=42098, avg=39426.67, stdev=9857.06 00:33:54.770 clat percentiles (usec): 00:33:54.770 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41157], 
00:33:54.770 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:54.770 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:54.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:54.770 | 99.99th=[42206] 00:33:54.770 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:33:54.770 slat (nsec): min=9230, max=54237, avg=32233.63, stdev=8915.69 00:33:54.770 clat (usec): min=176, max=3829, avg=624.14, stdev=198.89 00:33:54.770 lat (usec): min=187, max=3864, avg=656.37, stdev=201.17 00:33:54.770 clat percentiles (usec): 00:33:54.770 | 1.00th=[ 281], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 490], 00:33:54.770 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 668], 00:33:54.770 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840], 00:33:54.770 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 3818], 99.95th=[ 3818], 00:33:54.770 | 99.99th=[ 3818] 00:33:54.770 bw ( KiB/s): min= 4096, max= 4096, per=45.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.771 lat (usec) : 250=0.57%, 500=20.42%, 750=59.74%, 1000=15.88% 00:33:54.771 lat (msec) : 2=0.19%, 4=0.19%, 50=3.02% 00:33:54.771 cpu : usr=1.19%, sys=1.88%, ctx=529, majf=0, minf=2 00:33:54.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.771 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.771 job3: (groupid=0, jobs=1): err= 0: pid=1497364: Tue Oct 8 18:49:48 2024 00:33:54.771 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:54.771 slat (nsec): min=7072, max=59980, avg=27994.94, stdev=3373.56 00:33:54.771 clat (usec): min=610, max=1273, avg=959.29, stdev=73.33 00:33:54.771 lat (usec): min=621, max=1302, avg=987.29, stdev=74.01 00:33:54.771 clat percentiles (usec): 00:33:54.771 | 1.00th=[ 725], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 930], 00:33:54.771 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:33:54.771 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:33:54.771 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1270], 99.95th=[ 1270], 00:33:54.771 | 99.99th=[ 1270] 00:33:54.771 write: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec); 0 zone resets 00:33:54.771 slat (nsec): min=9537, max=69394, avg=31287.61, stdev=11206.11 00:33:54.771 clat (usec): min=196, max=1334, avg=607.50, stdev=141.71 00:33:54.771 lat (usec): min=209, max=1371, avg=638.79, stdev=145.80 00:33:54.771 clat percentiles (usec): 00:33:54.771 | 1.00th=[ 318], 5.00th=[ 375], 10.00th=[ 433], 20.00th=[ 498], 00:33:54.771 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:33:54.771 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 783], 95.00th=[ 865], 00:33:54.771 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1336], 99.95th=[ 1336], 00:33:54.771 | 99.99th=[ 1336] 00:33:54.771 bw ( KiB/s): min= 4096, max= 4096, per=45.16%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.771 lat (usec) : 250=0.16%, 500=12.24%, 750=40.05%, 1000=36.81% 00:33:54.771 lat (msec) : 2=10.74% 00:33:54.771 cpu : usr=2.50%, sys=5.10%, ctx=1268, majf=0, minf=1 00:33:54.771 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.771 issued rwts: total=512,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.771 00:33:54.771 Run status group 0 (all jobs): 00:33:54.771 READ: bw=4139KiB/s (4238kB/s), 67.3KiB/s-2046KiB/s (68.9kB/s-2095kB/s), io=4180KiB (4280kB), run=1001-1010msec 00:33:54.771 WRITE: bw=9069KiB/s (9287kB/s), 2028KiB/s-3013KiB/s (2076kB/s-3085kB/s), io=9160KiB (9380kB), run=1001-1010msec 00:33:54.771 00:33:54.771 Disk stats (read/write): 00:33:54.771 nvme0n1: ios=356/512, merge=0/0, ticks=648/143, in_queue=791, util=84.87% 00:33:54.771 nvme0n2: ios=62/512, merge=0/0, ticks=710/236, in_queue=946, util=88.46% 00:33:54.771 nvme0n3: ios=69/512, merge=0/0, ticks=550/250, in_queue=800, util=94.71% 00:33:54.771 nvme0n4: ios=548/512, merge=0/0, ticks=686/246, in_queue=932, util=94.75% 00:33:54.771 18:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:54.771 [global] 00:33:54.771 thread=1 00:33:54.771 invalidate=1 00:33:54.771 rw=randwrite 00:33:54.771 time_based=1 00:33:54.771 runtime=1 00:33:54.771 ioengine=libaio 00:33:54.771 direct=1 00:33:54.771 bs=4096 00:33:54.771 iodepth=1 00:33:54.771 norandommap=0 00:33:54.771 numjobs=1 00:33:54.771 00:33:54.771 verify_dump=1 00:33:54.771 verify_backlog=512 00:33:54.771 verify_state_save=0 00:33:54.771 do_verify=1 00:33:54.771 verify=crc32c-intel 00:33:54.771 [job0] 00:33:54.771 filename=/dev/nvme0n1 00:33:54.771 [job1] 00:33:54.771 filename=/dev/nvme0n2 00:33:54.771 [job2] 00:33:54.771 filename=/dev/nvme0n3 00:33:54.771 [job3] 00:33:54.771 filename=/dev/nvme0n4 00:33:54.771 Could not set queue depth (nvme0n1) 00:33:54.771 Could not set queue depth (nvme0n2) 00:33:54.771 Could not set queue depth (nvme0n3) 00:33:54.771 Could not set queue depth (nvme0n4) 00:33:55.033 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.033 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.033 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.033 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.033 fio-3.35 00:33:55.033 Starting 4 threads 00:33:56.439 00:33:56.439 job0: (groupid=0, jobs=1): err= 0: pid=1497771: Tue Oct 8 18:49:50 2024 00:33:56.439 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:56.439 slat (nsec): min=6998, max=62267, avg=25515.76, stdev=5824.63 00:33:56.439 clat (usec): min=380, max=1128, avg=812.96, stdev=130.49 00:33:56.439 lat (usec): min=407, max=1154, avg=838.48, stdev=130.92 00:33:56.439 clat percentiles (usec): 00:33:56.439 | 1.00th=[ 515], 5.00th=[ 586], 10.00th=[ 619], 20.00th=[ 685], 00:33:56.439 | 30.00th=[ 758], 40.00th=[ 791], 50.00th=[ 816], 60.00th=[ 865], 00:33:56.439 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 1004], 00:33:56.439 | 99.00th=[ 1037], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1123], 00:33:56.439 | 99.99th=[ 1123] 00:33:56.439 write: IOPS=1007, BW=4032KiB/s (4129kB/s)(4036KiB/1001msec); 0 
zone resets 00:33:56.439 slat (nsec): min=9242, max=55657, avg=28782.59, stdev=10158.49 00:33:56.439 clat (usec): min=125, max=1195, avg=525.76, stdev=141.76 00:33:56.439 lat (usec): min=135, max=1230, avg=554.54, stdev=146.77 00:33:56.439 clat percentiles (usec): 00:33:56.439 | 1.00th=[ 217], 5.00th=[ 269], 10.00th=[ 334], 20.00th=[ 400], 00:33:56.439 | 30.00th=[ 457], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 578], 00:33:56.439 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 734], 00:33:56.439 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 898], 99.95th=[ 1188], 00:33:56.439 | 99.99th=[ 1188] 00:33:56.439 bw ( KiB/s): min= 4096, max= 4096, per=41.85%, avg=4096.00, stdev= 0.00, samples=1 00:33:56.439 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:56.439 lat (usec) : 250=2.50%, 500=25.51%, 750=45.10%, 1000=25.12% 00:33:56.439 lat (msec) : 2=1.78% 00:33:56.439 cpu : usr=2.10%, sys=4.40%, ctx=1523, majf=0, minf=1 00:33:56.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.439 issued rwts: total=512,1009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.439 job1: (groupid=0, jobs=1): err= 0: pid=1497788: Tue Oct 8 18:49:50 2024 00:33:56.439 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec) 00:33:56.439 slat (nsec): min=27916, max=28657, avg=28265.21, stdev=167.38 00:33:56.439 clat (usec): min=1027, max=41985, avg=39117.22, stdev=9233.58 00:33:56.439 lat (usec): min=1055, max=42014, avg=39145.49, stdev=9233.61 00:33:56.439 clat percentiles (usec): 00:33:56.439 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[40633], 20.00th=[41157], 00:33:56.440 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:56.440 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:56.440 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:56.440 | 99.99th=[42206] 00:33:56.440 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:33:56.440 slat (nsec): min=9218, max=54678, avg=31363.06, stdev=9721.09 00:33:56.440 clat (usec): min=196, max=914, avg=524.87, stdev=145.82 00:33:56.440 lat (usec): min=232, max=939, avg=556.23, stdev=147.26 00:33:56.440 clat percentiles (usec): 00:33:56.440 | 1.00th=[ 227], 5.00th=[ 310], 10.00th=[ 347], 20.00th=[ 379], 00:33:56.440 | 30.00th=[ 449], 40.00th=[ 482], 50.00th=[ 515], 60.00th=[ 562], 00:33:56.440 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 725], 95.00th=[ 783], 00:33:56.440 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 914], 99.95th=[ 914], 00:33:56.440 | 99.99th=[ 914] 00:33:56.440 bw ( KiB/s): min= 4096, max= 4096, per=41.85%, avg=4096.00, stdev= 0.00, samples=1 00:33:56.440 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:56.440 lat (usec) : 250=2.26%, 500=42.94%, 750=43.69%, 1000=7.53% 00:33:56.440 lat (msec) : 2=0.19%, 50=3.39% 00:33:56.440 cpu : usr=0.87%, sys=2.23%, ctx=534, majf=0, minf=1 00:33:56.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.440 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.440 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:33:56.440 job2: (groupid=0, jobs=1): err= 0: pid=1497806: Tue Oct 8 18:49:50 2024 00:33:56.440 read: IOPS=17, BW=71.8KiB/s (73.5kB/s)(72.0KiB/1003msec) 00:33:56.440 slat (nsec): min=7942, max=27564, avg=25859.00, stdev=4479.66 00:33:56.440 clat (usec): min=1151, max=42008, avg=38807.82, stdev=9401.27 00:33:56.440 lat (usec): min=1177, max=42035, avg=38833.68, stdev=9401.09 00:33:56.440 clat percentiles (usec): 00:33:56.440 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[40633], 20.00th=[41157], 00:33:56.440 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:56.440 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:56.440 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:56.440 | 99.99th=[42206] 00:33:56.440 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:33:56.440 slat (nsec): min=9026, max=57959, avg=29787.42, stdev=8870.56 00:33:56.440 clat (usec): min=129, max=903, avg=555.45, stdev=159.69 00:33:56.440 lat (usec): min=139, max=937, avg=585.24, stdev=162.51 00:33:56.440 clat percentiles (usec): 00:33:56.440 | 1.00th=[ 223], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 392], 00:33:56.440 | 30.00th=[ 461], 40.00th=[ 519], 50.00th=[ 570], 60.00th=[ 611], 00:33:56.440 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 799], 00:33:56.440 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:33:56.440 | 99.99th=[ 906] 00:33:56.440 bw ( KiB/s): min= 4096, max= 4096, per=41.85%, avg=4096.00, stdev= 0.00, samples=1 00:33:56.440 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:56.440 lat (usec) : 250=2.64%, 500=33.40%, 750=47.92%, 1000=12.64% 00:33:56.440 lat (msec) : 2=0.19%, 50=3.21% 00:33:56.440 cpu : usr=1.60%, sys=1.50%, ctx=530, majf=0, minf=2 00:33:56.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.440 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.440 job3: (groupid=0, jobs=1): err= 0: pid=1497813: Tue Oct 8 18:49:50 2024 00:33:56.440 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:33:56.440 slat (nsec): min=9379, max=28576, avg=26705.72, stdev=4339.26 00:33:56.440 clat (usec): min=1006, max=42047, avg=39329.57, stdev=9573.44 00:33:56.440 lat (usec): min=1015, max=42074, avg=39356.28, stdev=9577.76 00:33:56.440 clat percentiles (usec): 00:33:56.440 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[41157], 20.00th=[41157], 00:33:56.440 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:33:56.440 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:56.440 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:56.440 | 99.99th=[42206] 00:33:56.440 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:33:56.440 slat (nsec): min=9377, max=67047, avg=32142.61, stdev=9347.18 00:33:56.440 clat (usec): min=212, max=1077, avg=606.27, stdev=143.14 00:33:56.440 lat (usec): min=222, max=1111, avg=638.41, stdev=146.89 00:33:56.440 clat percentiles (usec): 00:33:56.440 | 1.00th=[ 241], 5.00th=[ 347], 10.00th=[ 408], 20.00th=[ 490], 00:33:56.440 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 644], 00:33:56.440 | 70.00th=[ 685], 
80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 816], 00:33:56.440 | 99.00th=[ 898], 99.50th=[ 988], 99.90th=[ 1074], 99.95th=[ 1074], 00:33:56.440 | 99.99th=[ 1074] 00:33:56.440 bw ( KiB/s): min= 4096, max= 4096, per=41.85%, avg=4096.00, stdev= 0.00, samples=1 00:33:56.440 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:56.440 lat (usec) : 250=1.32%, 500=20.19%, 750=59.62%, 1000=15.09% 00:33:56.440 lat (msec) : 2=0.57%, 50=3.21% 00:33:56.440 cpu : usr=0.48%, sys=2.60%, ctx=531, majf=0, minf=1 00:33:56.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.440 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.440 00:33:56.440 Run status group 0 (all jobs): 00:33:56.440 READ: bw=2181KiB/s (2233kB/s), 69.2KiB/s-2046KiB/s (70.9kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1040msec 00:33:56.440 WRITE: bw=9788KiB/s (10.0MB/s), 1969KiB/s-4032KiB/s (2016kB/s-4129kB/s), io=9.94MiB (10.4MB), run=1001-1040msec 00:33:56.440 00:33:56.440 Disk stats (read/write): 00:33:56.440 nvme0n1: ios=561/691, merge=0/0, ticks=1021/360, in_queue=1381, util=84.37% 00:33:56.440 nvme0n2: ios=67/512, merge=0/0, ticks=845/215, in_queue=1060, util=88.69% 00:33:56.440 nvme0n3: ios=71/512, merge=0/0, ticks=644/210, in_queue=854, util=95.47% 00:33:56.440 nvme0n4: ios=42/512, merge=0/0, ticks=1372/231, in_queue=1603, util=94.77% 00:33:56.440 18:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:56.440 [global] 00:33:56.440 thread=1 00:33:56.440 invalidate=1 00:33:56.440 rw=write 00:33:56.440 time_based=1 00:33:56.440 runtime=1 00:33:56.440 ioengine=libaio 00:33:56.440 direct=1 00:33:56.440 bs=4096 00:33:56.440 iodepth=128 00:33:56.440 norandommap=0 00:33:56.440 numjobs=1 00:33:56.440 00:33:56.440 verify_dump=1 00:33:56.440 verify_backlog=512 00:33:56.440 verify_state_save=0 00:33:56.440 do_verify=1 00:33:56.440 verify=crc32c-intel 00:33:56.440 [job0] 00:33:56.440 filename=/dev/nvme0n1 00:33:56.440 [job1] 00:33:56.440 filename=/dev/nvme0n2 00:33:56.440 [job2] 00:33:56.440 filename=/dev/nvme0n3 00:33:56.440 [job3] 00:33:56.440 filename=/dev/nvme0n4 00:33:56.440 Could not set queue depth (nvme0n1) 00:33:56.440 Could not set queue depth (nvme0n2) 00:33:56.440 Could not set queue depth (nvme0n3) 00:33:56.440 Could not set queue depth (nvme0n4) 00:33:56.701 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.701 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.701 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.701 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.701 fio-3.35 00:33:56.701 Starting 4 threads 00:33:58.102 00:33:58.102 job0: (groupid=0, jobs=1): err= 0: pid=1498234: Tue Oct 8 18:49:51 2024 00:33:58.102 read: IOPS=4070, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:33:58.102 slat (nsec): min=894, max=7579.4k, avg=125300.85, stdev=706922.70 00:33:58.102 clat (usec): min=2288, max=28444, avg=15992.93, 
stdev=4173.73 00:33:58.102 lat (usec): min=2294, max=28454, avg=16118.23, stdev=4237.32 00:33:58.102 clat percentiles (usec): 00:33:58.102 | 1.00th=[ 6259], 5.00th=[ 7439], 10.00th=[ 9896], 20.00th=[13304], 00:33:58.102 | 30.00th=[14615], 40.00th=[15270], 50.00th=[16319], 60.00th=[17433], 00:33:58.102 | 70.00th=[18482], 80.00th=[19530], 90.00th=[20579], 95.00th=[21890], 00:33:58.102 | 99.00th=[23725], 99.50th=[25560], 99.90th=[27395], 99.95th=[27919], 00:33:58.102 | 99.99th=[28443] 00:33:58.102 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:33:58.102 slat (nsec): min=1530, max=6656.3k, avg=113739.24, stdev=561219.35 00:33:58.103 clat (usec): min=1300, max=29548, avg=15136.40, stdev=4639.65 00:33:58.103 lat (usec): min=1311, max=29551, avg=15250.14, stdev=4685.11 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 3884], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[11338], 00:33:58.103 | 30.00th=[14484], 40.00th=[15795], 50.00th=[16319], 60.00th=[16450], 00:33:58.103 | 70.00th=[16909], 80.00th=[17433], 90.00th=[20317], 95.00th=[21627], 00:33:58.103 | 99.00th=[26870], 99.50th=[28181], 99.90th=[29492], 99.95th=[29492], 00:33:58.103 | 99.99th=[29492] 00:33:58.103 bw ( KiB/s): min=15024, max=17744, per=16.66%, avg=16384.00, stdev=1923.33, samples=2 00:33:58.103 iops : min= 3756, max= 4436, avg=4096.00, stdev=480.83, samples=2 00:33:58.103 lat (msec) : 2=0.06%, 4=0.94%, 10=13.16%, 20=72.68%, 50=13.16% 00:33:58.103 cpu : usr=3.19%, sys=4.09%, ctx=399, majf=0, minf=2 00:33:58.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:58.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.103 issued rwts: total=4087,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.103 job1: (groupid=0, jobs=1): err= 0: pid=1498242: Tue Oct 8 18:49:51 2024 00:33:58.103 read: IOPS=8785, BW=34.3MiB/s (36.0MB/s)(34.5MiB/1005msec) 00:33:58.103 slat (nsec): min=988, max=9204.0k, avg=58036.10, stdev=441055.92 00:33:58.103 clat (usec): min=1861, max=16297, avg=7686.73, stdev=2118.68 00:33:58.103 lat (usec): min=2672, max=16647, avg=7744.77, stdev=2141.06 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 3785], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6063], 00:33:58.103 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7701], 00:33:58.103 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11731], 00:33:58.103 | 99.00th=[13435], 99.50th=[15926], 99.90th=[15926], 99.95th=[15926], 00:33:58.103 | 99.99th=[16319] 00:33:58.103 write: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec); 0 zone resets 00:33:58.103 slat (nsec): min=1672, max=6000.7k, avg=47899.37, stdev=309068.46 00:33:58.103 clat (usec): min=814, max=15836, avg=6444.44, stdev=1431.26 00:33:58.103 lat (usec): min=825, max=15844, avg=6492.34, stdev=1433.80 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 2573], 5.00th=[ 3916], 10.00th=[ 4293], 20.00th=[ 5473], 00:33:58.103 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:33:58.103 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 8160], 00:33:58.103 | 99.00th=[10421], 99.50th=[11600], 99.90th=[12649], 99.95th=[12911], 00:33:58.103 | 99.99th=[15795] 00:33:58.103 bw ( KiB/s): min=36840, max=36864, per=37.46%, avg=36852.00, stdev=16.97, samples=2 00:33:58.103 iops : min= 9210, max= 
9216, avg=9213.00, stdev= 4.24, samples=2 00:33:58.103 lat (usec) : 1000=0.03% 00:33:58.103 lat (msec) : 2=0.21%, 4=3.39%, 10=86.72%, 20=9.66% 00:33:58.103 cpu : usr=6.37%, sys=8.96%, ctx=735, majf=0, minf=1 00:33:58.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:58.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.103 issued rwts: total=8829,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.103 job2: (groupid=0, jobs=1): err= 0: pid=1498270: Tue Oct 8 18:49:51 2024 00:33:58.103 read: IOPS=6942, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1003msec) 00:33:58.103 slat (nsec): min=983, max=10852k, avg=66006.89, stdev=428275.90 00:33:58.103 clat (usec): min=1006, max=18642, avg=8831.32, stdev=1942.59 00:33:58.103 lat (usec): min=2260, max=18656, avg=8897.32, stdev=1963.03 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 3884], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 7504], 00:33:58.103 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:33:58.103 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:33:58.103 | 99.00th=[15795], 99.50th=[15926], 99.90th=[18482], 99.95th=[18482], 00:33:58.103 | 99.99th=[18744] 00:33:58.103 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:33:58.103 slat (nsec): min=1689, max=11860k, avg=68476.14, stdev=440731.58 00:33:58.103 clat (usec): min=1507, max=19288, avg=9035.69, stdev=1741.18 00:33:58.103 lat (usec): min=4251, max=19297, avg=9104.17, stdev=1750.37 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 5080], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[ 8455], 00:33:58.103 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:33:58.103 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[11600], 00:33:58.103 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19268], 99.95th=[19268], 00:33:58.103 | 99.99th=[19268] 00:33:58.103 bw ( KiB/s): min=28672, max=28672, per=29.15%, avg=28672.00, stdev= 0.00, samples=2 00:33:58.103 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:33:58.103 lat (msec) : 2=0.01%, 4=0.63%, 10=82.70%, 20=16.65% 00:33:58.103 cpu : usr=5.29%, sys=7.49%, ctx=485, majf=0, minf=1 00:33:58.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:58.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.103 issued rwts: total=6963,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.103 job3: (groupid=0, jobs=1): err= 0: pid=1498282: Tue Oct 8 18:49:51 2024 00:33:58.103 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:33:58.103 slat (nsec): min=964, max=7135.4k, avg=111208.79, stdev=644473.03 00:33:58.103 clat (usec): min=4272, max=23542, avg=14145.24, stdev=2747.86 00:33:58.103 lat (usec): min=4277, max=23611, avg=14256.45, stdev=2803.12 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 7570], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11994], 00:33:58.103 | 30.00th=[12649], 40.00th=[13829], 50.00th=[14353], 60.00th=[14746], 00:33:58.103 | 70.00th=[15533], 80.00th=[16450], 90.00th=[17433], 95.00th=[18482], 00:33:58.103 | 99.00th=[19792], 99.50th=[20055], 99.90th=[22414], 
99.95th=[22938], 00:33:58.103 | 99.99th=[23462] 00:33:58.103 write: IOPS=4217, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1004msec); 0 zone resets 00:33:58.103 slat (nsec): min=1849, max=38056k, avg=123440.04, stdev=808008.15 00:33:58.103 clat (usec): min=418, max=42727, avg=15234.19, stdev=3289.40 00:33:58.103 lat (usec): min=3666, max=47039, avg=15357.63, stdev=3368.47 00:33:58.103 clat percentiles (usec): 00:33:58.103 | 1.00th=[ 4047], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[13566], 00:33:58.103 | 30.00th=[14615], 40.00th=[15795], 50.00th=[16188], 60.00th=[16319], 00:33:58.103 | 70.00th=[16909], 80.00th=[17171], 90.00th=[18482], 95.00th=[19792], 00:33:58.103 | 99.00th=[21627], 99.50th=[22414], 99.90th=[24249], 99.95th=[24249], 00:33:58.103 | 99.99th=[42730] 00:33:58.103 bw ( KiB/s): min=15944, max=16912, per=16.70%, avg=16428.00, stdev=684.48, samples=2 00:33:58.103 iops : min= 3986, max= 4228, avg=4107.00, stdev=171.12, samples=2 00:33:58.103 lat (usec) : 500=0.01% 00:33:58.103 lat (msec) : 4=0.35%, 10=6.04%, 20=91.12%, 50=2.48% 00:33:58.103 cpu : usr=3.29%, sys=4.79%, ctx=401, majf=0, minf=1 00:33:58.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:58.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:58.103 issued rwts: total=4096,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:58.103 00:33:58.103 Run status group 0 (all jobs): 00:33:58.103 READ: bw=93.2MiB/s (97.7MB/s), 15.9MiB/s-34.3MiB/s (16.7MB/s-36.0MB/s), io=93.7MiB (98.2MB), run=1003-1005msec 00:33:58.103 WRITE: bw=96.1MiB/s (101MB/s), 15.9MiB/s-35.8MiB/s (16.7MB/s-37.6MB/s), io=96.5MiB (101MB), run=1003-1005msec 00:33:58.103 00:33:58.103 Disk stats (read/write): 00:33:58.103 nvme0n1: ios=3122/3340, merge=0/0, ticks=16498/17284, in_queue=33782, util=81.06% 00:33:58.103 nvme0n2: ios=6702/7168, merge=0/0, ticks=48973/43596, in_queue=92569, util=87.36% 00:33:58.103 nvme0n3: ios=5360/5632, merge=0/0, ticks=23837/22610, in_queue=46447, util=93.12% 00:33:58.103 nvme0n4: ios=3094/3390, merge=0/0, ticks=14707/16377, in_queue=31084, util=99.32% 00:33:58.103 18:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:58.103 [global] 00:33:58.103 thread=1 00:33:58.103 invalidate=1 00:33:58.103 rw=randwrite 00:33:58.103 time_based=1 00:33:58.103 runtime=1 00:33:58.103 ioengine=libaio 00:33:58.103 direct=1 00:33:58.103 bs=4096 00:33:58.103 iodepth=128 00:33:58.103 norandommap=0 00:33:58.103 numjobs=1 00:33:58.103 00:33:58.103 verify_dump=1 00:33:58.103 verify_backlog=512 00:33:58.103 verify_state_save=0 00:33:58.103 do_verify=1 00:33:58.103 verify=crc32c-intel 00:33:58.103 [job0] 00:33:58.103 filename=/dev/nvme0n1 00:33:58.103 [job1] 00:33:58.103 filename=/dev/nvme0n2 00:33:58.103 [job2] 00:33:58.103 filename=/dev/nvme0n3 00:33:58.103 [job3] 00:33:58.103 filename=/dev/nvme0n4 00:33:58.103 Could not set queue depth (nvme0n1) 00:33:58.103 Could not set queue depth (nvme0n2) 00:33:58.103 Could not set queue depth (nvme0n3) 00:33:58.103 Could not set queue depth (nvme0n4) 00:33:58.366 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.366 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:33:58.366 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.366 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.366 fio-3.35 00:33:58.366 Starting 4 threads 00:33:59.751 00:33:59.751 job0: (groupid=0, jobs=1): err= 0: pid=1498759: Tue Oct 8 18:49:53 2024 00:33:59.751 read: IOPS=7759, BW=30.3MiB/s (31.8MB/s)(30.5MiB/1006msec) 00:33:59.751 slat (nsec): min=956, max=7648.6k, avg=65152.53, stdev=516985.75 00:33:59.751 clat (usec): min=1131, max=19628, avg=8504.01, stdev=2209.34 00:33:59.751 lat (usec): min=2368, max=19631, avg=8569.16, stdev=2240.16 00:33:59.751 clat percentiles (usec): 00:33:59.751 | 1.00th=[ 4555], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 6915], 00:33:59.751 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8356], 00:33:59.751 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11731], 95.00th=[13173], 00:33:59.751 | 99.00th=[15008], 99.50th=[15664], 99.90th=[19530], 99.95th=[19530], 00:33:59.751 | 99.99th=[19530] 00:33:59.751 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:33:59.751 slat (nsec): min=1561, max=6541.8k, avg=55949.12, stdev=396861.93 00:33:59.751 clat (usec): min=1181, max=15681, avg=7472.48, stdev=1853.13 00:33:59.751 lat (usec): min=1189, max=15683, avg=7528.43, stdev=1869.20 00:33:59.751 clat percentiles (usec): 00:33:59.751 | 1.00th=[ 3294], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5800], 00:33:59.751 | 30.00th=[ 6587], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8029], 00:33:59.751 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[10159], 95.00th=[10814], 00:33:59.751 | 99.00th=[11863], 99.50th=[12649], 99.90th=[15270], 99.95th=[15401], 00:33:59.751 | 99.99th=[15664] 00:33:59.751 bw ( KiB/s): min=32752, max=32768, per=30.29%, avg=32760.00, stdev=11.31, samples=2 00:33:59.751 iops : min= 8188, max= 8192, avg=8190.00, stdev= 2.83, samples=2 00:33:59.751 lat (msec) : 2=0.10%, 4=0.87%, 10=83.24%, 20=15.79% 00:33:59.751 cpu : usr=3.68%, sys=8.46%, ctx=605, majf=0, minf=1 00:33:59.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:59.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.751 issued rwts: total=7806,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.751 job1: (groupid=0, jobs=1): err= 0: pid=1498767: Tue Oct 8 18:49:53 2024 00:33:59.752 read: IOPS=7426, BW=29.0MiB/s (30.4MB/s)(29.1MiB/1003msec) 00:33:59.752 slat (nsec): min=948, max=4485.0k, avg=64322.05, stdev=407945.79 00:33:59.752 clat (usec): min=1337, max=14435, avg=8405.50, stdev=1169.59 00:33:59.752 lat (usec): min=4134, max=14463, avg=8469.82, stdev=1207.44 00:33:59.752 clat percentiles (usec): 00:33:59.752 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7832], 00:33:59.752 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8291], 60.00th=[ 8455], 00:33:59.752 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10552], 00:33:59.752 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12256], 99.95th=[13304], 00:33:59.752 | 99.99th=[14484] 00:33:59.752 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:33:59.752 slat (nsec): min=1614, max=4280.2k, avg=63410.22, stdev=350608.10 00:33:59.752 clat (usec): min=3824, max=15332, 
avg=8344.94, stdev=1055.96 00:33:59.752 lat (usec): min=3833, max=15389, avg=8408.35, stdev=1095.20 00:33:59.752 clat percentiles (usec): 00:33:59.752 | 1.00th=[ 5407], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 7898], 00:33:59.752 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8356], 00:33:59.752 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[10290], 00:33:59.752 | 99.00th=[12387], 99.50th=[13829], 99.90th=[15008], 99.95th=[15270], 00:33:59.752 | 99.99th=[15270] 00:33:59.752 bw ( KiB/s): min=30656, max=30784, per=28.41%, avg=30720.00, stdev=90.51, samples=2 00:33:59.752 iops : min= 7664, max= 7696, avg=7680.00, stdev=22.63, samples=2 00:33:59.752 lat (msec) : 2=0.01%, 4=0.05%, 10=92.29%, 20=7.65% 00:33:59.752 cpu : usr=4.39%, sys=6.89%, ctx=744, majf=0, minf=2 00:33:59.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:59.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.752 issued rwts: total=7449,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.752 job2: (groupid=0, jobs=1): err= 0: pid=1498775: Tue Oct 8 18:49:53 2024 00:33:59.752 read: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.7MiB/1043msec) 00:33:59.752 slat (nsec): min=948, max=5341.4k, avg=80111.26, stdev=507021.85 00:33:59.752 clat (usec): min=5757, max=50376, avg=10790.19, stdev=5450.64 00:33:59.752 lat (usec): min=5764, max=54508, avg=10870.30, stdev=5467.18 00:33:59.752 clat percentiles (usec): 00:33:59.752 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[ 9372], 00:33:59.752 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:33:59.752 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11994], 95.00th=[13042], 00:33:59.752 | 99.00th=[45351], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:33:59.752 | 99.99th=[50594] 00:33:59.752 write: IOPS=6381, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1043msec); 0 zone resets 00:33:59.752 slat (nsec): min=1590, max=4679.6k, avg=69899.06, stdev=340904.12 00:33:59.752 clat (usec): min=5165, max=14159, avg=9519.43, stdev=1056.98 00:33:59.752 lat (usec): min=5174, max=14290, avg=9589.33, stdev=1075.92 00:33:59.752 clat percentiles (usec): 00:33:59.752 | 1.00th=[ 6194], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9110], 00:33:59.752 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9503], 00:33:59.752 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11600], 00:33:59.752 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13829], 99.95th=[13829], 00:33:59.752 | 99.99th=[14222] 00:33:59.752 bw ( KiB/s): min=26168, max=27080, per=24.62%, avg=26624.00, stdev=644.88, samples=2 00:33:59.752 iops : min= 6542, max= 6770, avg=6656.00, stdev=161.22, samples=2 00:33:59.752 lat (msec) : 10=70.13%, 20=28.89%, 50=0.72%, 100=0.25% 00:33:59.752 cpu : usr=3.07%, sys=5.76%, ctx=770, majf=0, minf=1 00:33:59.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:59.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.752 issued rwts: total=6333,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.752 job3: (groupid=0, jobs=1): err= 0: pid=1498782: Tue Oct 8 18:49:53 2024 00:33:59.752 read: IOPS=5620, BW=22.0MiB/s 
(23.0MB/s)(22.0MiB/1002msec) 00:33:59.752 slat (nsec): min=920, max=3297.3k, avg=88273.10, stdev=409506.65 00:33:59.752 clat (usec): min=5319, max=13536, avg=11194.78, stdev=1055.93 00:33:59.752 lat (usec): min=5321, max=14560, avg=11283.05, stdev=1003.55 00:33:59.752 clat percentiles (usec): 00:33:59.752 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10552], 00:33:59.752 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:33:59.752 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12387], 95.00th=[12780], 00:33:59.752 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13566], 99.95th=[13566], 00:33:59.752 | 99.99th=[13566] 00:33:59.752 write: IOPS=5659, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1002msec); 0 zone resets 00:33:59.752 slat (nsec): min=1514, max=4167.5k, avg=85660.23, stdev=366579.78 00:33:59.752 clat (usec): min=828, max=35125, avg=11211.20, stdev=3576.99 00:33:59.752 lat (usec): min=3085, max=35139, avg=11296.86, stdev=3589.24 00:33:59.752 clat percentiles (usec): 00:33:59.752 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9241], 00:33:59.752 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11338], 00:33:59.752 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[18744], 00:33:59.752 | 99.00th=[28967], 99.50th=[30802], 99.90th=[34341], 99.95th=[34866], 00:33:59.752 | 99.99th=[34866] 00:33:59.752 bw ( KiB/s): min=20480, max=24576, per=20.83%, avg=22528.00, stdev=2896.31, samples=2 00:33:59.752 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:33:59.752 lat (usec) : 1000=0.01% 00:33:59.752 lat (msec) : 4=0.29%, 10=28.28%, 20=69.49%, 50=1.92% 00:33:59.752 cpu : usr=2.30%, sys=4.10%, ctx=822, majf=0, minf=1 00:33:59.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:59.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.752 issued rwts: total=5632,5671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.752 00:33:59.752 Run status group 0 (all jobs): 00:33:59.752 READ: bw=102MiB/s (107MB/s), 22.0MiB/s-30.3MiB/s (23.0MB/s-31.8MB/s), io=106MiB (111MB), run=1002-1043msec 00:33:59.752 WRITE: bw=106MiB/s (111MB/s), 22.1MiB/s-31.8MiB/s (23.2MB/s-33.4MB/s), io=110MiB (116MB), run=1002-1043msec 00:33:59.752 00:33:59.752 Disk stats (read/write): 00:33:59.752 nvme0n1: ios=6689/6659, merge=0/0, ticks=53896/47486, in_queue=101382, util=86.87% 00:33:59.752 nvme0n2: ios=6197/6479, merge=0/0, ticks=25267/24295, in_queue=49562, util=88.37% 00:33:59.752 nvme0n3: ios=5200/5632, merge=0/0, ticks=24454/23694, in_queue=48148, util=92.28% 00:33:59.752 nvme0n4: ios=4665/4639, merge=0/0, ticks=13366/13883, in_queue=27249, util=97.22% 00:33:59.752 18:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:59.752 18:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1499076 00:33:59.752 18:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:59.752 18:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:59.752 [global] 00:33:59.752 thread=1 00:33:59.752 invalidate=1 00:33:59.752 rw=read 00:33:59.752 time_based=1 00:33:59.752 runtime=10 
00:33:59.752 ioengine=libaio 00:33:59.752 direct=1 00:33:59.752 bs=4096 00:33:59.752 iodepth=1 00:33:59.752 norandommap=1 00:33:59.752 numjobs=1 00:33:59.752 00:33:59.752 [job0] 00:33:59.752 filename=/dev/nvme0n1 00:33:59.752 [job1] 00:33:59.752 filename=/dev/nvme0n2 00:33:59.752 [job2] 00:33:59.752 filename=/dev/nvme0n3 00:33:59.752 [job3] 00:33:59.752 filename=/dev/nvme0n4 00:33:59.752 Could not set queue depth (nvme0n1) 00:33:59.752 Could not set queue depth (nvme0n2) 00:33:59.752 Could not set queue depth (nvme0n3) 00:33:59.752 Could not set queue depth (nvme0n4) 00:34:00.013 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.013 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.013 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.013 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.013 fio-3.35 00:34:00.013 Starting 4 threads 00:34:02.555 18:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:02.835 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11718656, buflen=4096 00:34:02.835 fio: pid=1499290, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.835 18:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:03.134 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11141120, buflen=4096 00:34:03.134 fio: pid=1499288, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.134 18:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.134 18:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:03.134 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10649600, buflen=4096 00:34:03.134 fio: pid=1499273, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.134 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.134 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:03.402 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.402 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:03.402 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=516096, buflen=4096 00:34:03.402 fio: pid=1499281, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.402 00:34:03.402 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=1499273: Tue Oct 8 18:49:57 2024 00:34:03.402 read: IOPS=874, BW=3497KiB/s (3581kB/s)(10.2MiB/2974msec) 00:34:03.402 slat (usec): min=6, max=26528, avg=42.61, stdev=558.77 00:34:03.402 clat (usec): min=590, max=41303, avg=1091.47, stdev=2359.64 00:34:03.402 lat (usec): min=615, max=49412, avg=1134.09, stdev=2516.35 00:34:03.402 clat percentiles (usec): 00:34:03.402 | 1.00th=[ 701], 5.00th=[ 775], 10.00th=[ 807], 20.00th=[ 840], 00:34:03.402 | 30.00th=[ 889], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 988], 00:34:03.402 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1156], 00:34:03.402 | 99.00th=[ 1254], 99.50th=[ 1319], 99.90th=[41157], 99.95th=[41157], 00:34:03.402 | 99.99th=[41157] 00:34:03.402 bw ( KiB/s): min= 3712, max= 4168, per=38.24%, avg=3992.00, stdev=175.00, samples=5 00:34:03.402 iops : min= 928, max= 1042, avg=998.00, stdev=43.75, samples=5 00:34:03.402 lat (usec) : 750=2.65%, 1000=62.05% 00:34:03.402 lat (msec) : 2=34.91%, 50=0.35% 00:34:03.402 cpu : usr=0.94%, sys=3.80%, ctx=2606, majf=0, minf=1 00:34:03.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.402 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.402 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.402 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1499281: Tue Oct 8 18:49:57 2024 00:34:03.402 read: IOPS=39, BW=158KiB/s (162kB/s)(504KiB/3183msec) 00:34:03.402 slat (usec): min=8, max=9546, avg=142.78, stdev=979.45 00:34:03.402 clat (usec): min=457, max=42167, avg=24896.57, stdev=20357.21 00:34:03.402 lat (usec): min=465, max=50947, avg=25040.28, stdev=20481.05 00:34:03.402 clat percentiles (usec): 00:34:03.402 | 1.00th=[ 478], 5.00th=[ 586], 10.00th=[ 611], 20.00th=[ 693], 00:34:03.402 | 30.00th=[ 766], 40.00th=[ 1045], 50.00th=[41681], 60.00th=[41681], 00:34:03.402 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:03.402 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:03.402 | 99.99th=[42206] 00:34:03.402 bw ( KiB/s): min= 96, max= 496, per=1.55%, avg=162.67, stdev=163.30, samples=6 00:34:03.402 iops : min= 24, max= 124, avg=40.67, stdev=40.82, samples=6 00:34:03.402 lat (usec) : 500=1.57%, 750=24.41%, 1000=13.39% 00:34:03.402 lat (msec) : 2=1.57%, 50=58.27% 00:34:03.402 cpu : usr=0.03%, sys=0.09%, ctx=129, majf=0, minf=2 00:34:03.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.402 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.402 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.402 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1499288: Tue Oct 8 18:49:57 2024 00:34:03.403 read: IOPS=974, BW=3898KiB/s (3992kB/s)(10.6MiB/2791msec) 00:34:03.403 slat (usec): min=7, max=9764, avg=33.30, stdev=229.13 00:34:03.403 clat (usec): min=464, max=1282, avg=976.91, stdev=82.62 00:34:03.403 lat (usec): min=477, max=10803, avg=1010.21, stdev=245.35 00:34:03.403 clat percentiles (usec): 00:34:03.403 | 1.00th=[ 750], 5.00th=[ 
832], 10.00th=[ 873], 20.00th=[ 914], 00:34:03.403 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 1004], 00:34:03.403 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:34:03.403 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1270], 00:34:03.403 | 99.99th=[ 1287] 00:34:03.403 bw ( KiB/s): min= 3912, max= 4008, per=37.91%, avg=3958.40, stdev=41.34, samples=5 00:34:03.403 iops : min= 978, max= 1002, avg=989.60, stdev=10.33, samples=5 00:34:03.403 lat (usec) : 500=0.04%, 750=0.88%, 1000=59.24% 00:34:03.403 lat (msec) : 2=39.80% 00:34:03.403 cpu : usr=1.18%, sys=4.59%, ctx=2723, majf=0, minf=2 00:34:03.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.403 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.403 issued rwts: total=2721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.403 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1499290: Tue Oct 8 18:49:57 2024 00:34:03.403 read: IOPS=1099, BW=4396KiB/s (4502kB/s)(11.2MiB/2603msec) 00:34:03.403 slat (nsec): min=7057, max=59354, avg=24573.47, stdev=7019.42 00:34:03.403 clat (usec): min=334, max=41214, avg=870.42, stdev=775.45 00:34:03.403 lat (usec): min=362, max=41244, avg=895.00, stdev=775.90 00:34:03.403 clat percentiles (usec): 00:34:03.403 | 1.00th=[ 498], 5.00th=[ 578], 10.00th=[ 644], 20.00th=[ 701], 00:34:03.403 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 881], 00:34:03.403 | 70.00th=[ 979], 80.00th=[ 1045], 90.00th=[ 1106], 95.00th=[ 1156], 00:34:03.403 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1434], 99.95th=[ 1483], 00:34:03.403 | 99.99th=[41157] 00:34:03.403 bw ( KiB/s): min= 3712, max= 5272, per=42.39%, avg=4425.60, stdev=687.19, samples=5 00:34:03.403 iops : min= 928, max= 1318, avg=1106.40, stdev=171.80, samples=5 00:34:03.403 lat (usec) : 500=1.01%, 750=28.51%, 1000=43.22% 00:34:03.403 lat (msec) : 2=27.18%, 50=0.03% 00:34:03.403 cpu : usr=1.11%, sys=3.15%, ctx=2862, majf=0, minf=2 00:34:03.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.403 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.403 issued rwts: total=2862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.403 00:34:03.403 Run status group 0 (all jobs): 00:34:03.403 READ: bw=10.2MiB/s (10.7MB/s), 158KiB/s-4396KiB/s (162kB/s-4502kB/s), io=32.4MiB (34.0MB), run=2603-3183msec 00:34:03.403 00:34:03.403 Disk stats (read/write): 00:34:03.403 nvme0n1: ios=2596/0, merge=0/0, ticks=2561/0, in_queue=2561, util=93.39% 00:34:03.403 nvme0n2: ios=124/0, merge=0/0, ticks=3052/0, in_queue=3052, util=95.23% 00:34:03.403 nvme0n3: ios=2561/0, merge=0/0, ticks=2415/0, in_queue=2415, util=96.03% 00:34:03.403 nvme0n4: ios=2890/0, merge=0/0, ticks=2512/0, in_queue=2512, util=98.06% 00:34:03.664 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.664 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc3 00:34:03.664 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.664 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:03.925 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.925 18:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:04.185 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.185 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1499076 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:04.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:04.449 nvmf hotplug test: fio failed as expected 00:34:04.449 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:04.710 18:49:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:04.710 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.711 rmmod nvme_tcp 00:34:04.711 rmmod nvme_fabrics 00:34:04.711 rmmod nvme_keyring 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1495890 ']' 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1495890 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1495890 ']' 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1495890 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495890 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495890' 00:34:04.711 killing process with pid 1495890 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1495890 00:34:04.711 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1495890 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.971 18:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.882 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.882 00:34:06.882 real 0m28.592s 00:34:06.882 user 2m17.472s 00:34:06.882 sys 0m12.441s 00:34:06.882 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:06.882 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.882 ************************************ 00:34:06.882 END TEST nvmf_fio_target 00:34:06.882 ************************************ 00:34:07.143 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:07.143 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:07.143 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:07.143 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.143 ************************************ 00:34:07.143 START TEST nvmf_bdevio 00:34:07.143 ************************************ 00:34:07.143 18:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:07.143 * Looking for test storage... 
00:34:07.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.143 --rc genhtml_branch_coverage=1 00:34:07.143 --rc genhtml_function_coverage=1 00:34:07.143 --rc genhtml_legend=1 00:34:07.143 --rc geninfo_all_blocks=1 00:34:07.143 --rc geninfo_unexecuted_blocks=1 00:34:07.143 00:34:07.143 ' 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.143 --rc genhtml_branch_coverage=1 00:34:07.143 --rc genhtml_function_coverage=1 00:34:07.143 --rc genhtml_legend=1 00:34:07.143 --rc geninfo_all_blocks=1 00:34:07.143 --rc geninfo_unexecuted_blocks=1 00:34:07.143 00:34:07.143 ' 00:34:07.143 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.143 --rc genhtml_branch_coverage=1 00:34:07.143 --rc genhtml_function_coverage=1 00:34:07.143 --rc genhtml_legend=1 00:34:07.143 --rc geninfo_all_blocks=1 00:34:07.143 --rc geninfo_unexecuted_blocks=1 00:34:07.143 00:34:07.143 ' 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:07.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.144 --rc genhtml_branch_coverage=1 00:34:07.144 --rc genhtml_function_coverage=1 00:34:07.144 --rc genhtml_legend=1 00:34:07.144 --rc geninfo_all_blocks=1 00:34:07.144 --rc geninfo_unexecuted_blocks=1 00:34:07.144 00:34:07.144 ' 00:34:07.144 18:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.144 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.406 18:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.406 18:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:15.548 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:15.548 18:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:15.548 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:15.548 Found net devices under 0000:31:00.0: cvl_0_0 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:15.548 Found net devices under 0000:31:00.1: cvl_0_1 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.548 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:15.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:34:15.549 00:34:15.549 --- 10.0.0.2 ping statistics --- 00:34:15.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.549 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:15.549 00:34:15.549 --- 10.0.0.1 ping statistics --- 00:34:15.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.549 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:15.549 18:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1504432 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1504432 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1504432 ']' 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:15.549 18:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.549 [2024-10-08 18:50:08.966742] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:15.549 [2024-10-08 18:50:08.967898] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:34:15.549 [2024-10-08 18:50:08.967947] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.549 [2024-10-08 18:50:09.057008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.549 [2024-10-08 18:50:09.147421] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.549 [2024-10-08 18:50:09.147480] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.549 [2024-10-08 18:50:09.147489] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.549 [2024-10-08 18:50:09.147496] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.549 [2024-10-08 18:50:09.147502] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.549 [2024-10-08 18:50:09.149493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:15.549 [2024-10-08 18:50:09.149655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:34:15.549 [2024-10-08 18:50:09.149816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.549 [2024-10-08 18:50:09.149816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:34:15.549 [2024-10-08 18:50:09.255412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
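Note: this bdevio pass exercises the target in interrupt mode. nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x78 (cores 3-6) and --interrupt-mode, which is why app_thread here, and each nvmf_tgt poll group just below, is switched to intr mode, and why waitforlisten blocks until the RPC socket answers. A minimal sketch of the same launch-and-provision flow written as explicit commands; the rpc_get_methods polling loop is only a stand-in for the harness's waitforlisten helper, while the rpc.py calls mirror the rpc_cmd invocations traced below:

  # start the target inside the namespace, pinned to cores 3-6, in interrupt mode
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  # wait for the RPC socket to answer before provisioning (stand-in for waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # provisioning sequence issued by target/bdevio.sh via rpc_cmd
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420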
00:34:15.549 [2024-10-08 18:50:09.255897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:15.549 [2024-10-08 18:50:09.256456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:15.549 [2024-10-08 18:50:09.256897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:15.549 [2024-10-08 18:50:09.256964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.810 [2024-10-08 18:50:09.826686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.810 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.071 Malloc0 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.071 18:50:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.071 [2024-10-08 18:50:09.911055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:16.071 { 00:34:16.071 "params": { 00:34:16.071 "name": "Nvme$subsystem", 00:34:16.071 "trtype": "$TEST_TRANSPORT", 00:34:16.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.071 "adrfam": "ipv4", 00:34:16.071 "trsvcid": "$NVMF_PORT", 00:34:16.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.071 "hdgst": ${hdgst:-false}, 00:34:16.071 "ddgst": ${ddgst:-false} 00:34:16.071 }, 00:34:16.071 "method": "bdev_nvme_attach_controller" 00:34:16.071 } 00:34:16.071 EOF 00:34:16.071 )") 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:16.071 18:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:16.071 "params": { 00:34:16.071 "name": "Nvme1", 00:34:16.071 "trtype": "tcp", 00:34:16.071 "traddr": "10.0.0.2", 00:34:16.071 "adrfam": "ipv4", 00:34:16.071 "trsvcid": "4420", 00:34:16.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.071 "hdgst": false, 00:34:16.071 "ddgst": false 00:34:16.071 }, 00:34:16.071 "method": "bdev_nvme_attach_controller" 00:34:16.072 }' 00:34:16.072 [2024-10-08 18:50:09.968924] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:34:16.072 [2024-10-08 18:50:09.969003] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504711 ] 00:34:16.072 [2024-10-08 18:50:10.057284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.332 [2024-10-08 18:50:10.157896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.332 [2024-10-08 18:50:10.158058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.332 [2024-10-08 18:50:10.158078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.332 I/O targets: 00:34:16.332 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:16.332 00:34:16.332 00:34:16.332 CUnit - A unit testing framework for C - Version 2.1-3 00:34:16.332 http://cunit.sourceforge.net/ 00:34:16.332 00:34:16.332 00:34:16.332 Suite: bdevio tests on: Nvme1n1 00:34:16.332 Test: blockdev write read block ...passed 00:34:16.593 Test: blockdev write zeroes read block ...passed 00:34:16.593 Test: blockdev write zeroes read no split ...passed 00:34:16.593 Test: blockdev write zeroes read split ...passed 00:34:16.593 Test: blockdev write zeroes read split partial ...passed 00:34:16.593 Test: blockdev reset ...[2024-10-08 18:50:10.450966] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.593 [2024-10-08 18:50:10.451061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0f000 (9): Bad file descriptor 00:34:16.593 [2024-10-08 18:50:10.463748] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:16.593 passed 00:34:16.593 Test: blockdev write read 8 blocks ...passed 00:34:16.593 Test: blockdev write read size > 128k ...passed 00:34:16.593 Test: blockdev write read invalid size ...passed 00:34:16.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:16.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:16.593 Test: blockdev write read max offset ...passed 00:34:16.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:16.593 Test: blockdev writev readv 8 blocks ...passed 00:34:16.854 Test: blockdev writev readv 30 x 1block ...passed 00:34:16.854 Test: blockdev writev readv block ...passed 00:34:16.854 Test: blockdev writev readv size > 128k ...passed 00:34:16.854 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:16.854 Test: blockdev comparev and writev ...[2024-10-08 18:50:10.766935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.854 [2024-10-08 18:50:10.766992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.854 [2024-10-08 18:50:10.767010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.854 [2024-10-08 18:50:10.767019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.854 [2024-10-08 18:50:10.767485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.854 [2024-10-08 18:50:10.767498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.854 [2024-10-08 18:50:10.767512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.855 [2024-10-08 18:50:10.767522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.768037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.855 [2024-10-08 18:50:10.768049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.768063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.855 [2024-10-08 18:50:10.768077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.768601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.855 [2024-10-08 18:50:10.768614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.768628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.855 [2024-10-08 18:50:10.768635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.855 passed 00:34:16.855 Test: blockdev nvme passthru rw ...passed 00:34:16.855 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:50:10.853658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.855 [2024-10-08 18:50:10.853675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.853905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.855 [2024-10-08 18:50:10.853916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.854179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.855 [2024-10-08 18:50:10.854191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.855 [2024-10-08 18:50:10.854460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.855 [2024-10-08 18:50:10.854471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.855 passed 00:34:16.855 Test: blockdev nvme admin passthru ...passed 00:34:17.115 Test: blockdev copy ...passed 00:34:17.115 00:34:17.115 Run Summary: Type Total Ran Passed Failed Inactive 00:34:17.115 suites 1 1 n/a 0 0 00:34:17.115 tests 23 23 23 0 0 00:34:17.115 asserts 152 152 152 0 n/a 00:34:17.115 00:34:17.115 Elapsed time = 1.197 seconds 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.116 rmmod nvme_tcp 00:34:17.116 rmmod nvme_fabrics 00:34:17.116 rmmod nvme_keyring 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
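Note: the *NOTICE* completions in the comparev-and-writev and passthru cases above are the expected results, not stray failures: bdevio issues fused COMPARE+WRITE pairs and asserts on COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09), and the vendor-specific passthru commands are expected to complete with INVALID OPCODE (00/01). The earlier "Bad file descriptor" during blockdev reset is likewise the intended effect of dropping the TCP qpair before the controller reconnects. With the 23/23 summary in, nvmftestfini tears the rig down: sync, unload the nvme kernel modules (the rmmod lines above), then stop the target via killprocess. A condensed sketch of that helper, with the retry and sudo branches trimmed, matching the xtrace that follows:

  # stop the nvmf_tgt pid recorded at startup and reap it
  killprocess() {
      local pid=$1
      kill -0 "$pid"                          # fails early if the process is already gone
      # ps reports comm 'reactor_3' here, so the plain-kill branch is taken
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] || { kill "$pid" && wait "$pid"; }
  }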
00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1504432 ']' 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1504432 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1504432 ']' 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1504432 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:17.116 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:17.376 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504432 00:34:17.376 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:17.376 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:17.376 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1504432' 00:34:17.376 killing process with pid 1504432 00:34:17.376 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1504432 00:34:17.376 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1504432 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.637 18:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.550 18:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.550 00:34:19.550 real 0m12.559s 00:34:19.550 user 
0m9.989s 00:34:19.550 sys 0m6.599s 00:34:19.550 18:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:19.550 18:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:19.550 ************************************ 00:34:19.550 END TEST nvmf_bdevio 00:34:19.550 ************************************ 00:34:19.550 18:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:19.550 00:34:19.550 real 5m4.568s 00:34:19.550 user 10m17.866s 00:34:19.550 sys 2m5.773s 00:34:19.550 18:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:19.550 18:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:19.550 ************************************ 00:34:19.550 END TEST nvmf_target_core_interrupt_mode 00:34:19.550 ************************************ 00:34:19.810 18:50:13 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:19.810 18:50:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:19.810 18:50:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:19.810 18:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.810 ************************************ 00:34:19.810 START TEST nvmf_interrupt 00:34:19.810 ************************************ 00:34:19.810 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:19.810 * Looking for test storage... 
00:34:19.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.810 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:19.810 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.811 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.072 --rc genhtml_branch_coverage=1 00:34:20.072 --rc genhtml_function_coverage=1 00:34:20.072 --rc genhtml_legend=1 00:34:20.072 --rc geninfo_all_blocks=1 00:34:20.072 --rc geninfo_unexecuted_blocks=1 00:34:20.072 00:34:20.072 ' 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.072 --rc genhtml_branch_coverage=1 00:34:20.072 --rc genhtml_function_coverage=1 00:34:20.072 --rc genhtml_legend=1 00:34:20.072 --rc geninfo_all_blocks=1 00:34:20.072 --rc geninfo_unexecuted_blocks=1 00:34:20.072 00:34:20.072 ' 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.072 --rc genhtml_branch_coverage=1 00:34:20.072 --rc genhtml_function_coverage=1 00:34:20.072 --rc genhtml_legend=1 00:34:20.072 --rc geninfo_all_blocks=1 00:34:20.072 --rc geninfo_unexecuted_blocks=1 00:34:20.072 00:34:20.072 ' 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.072 --rc genhtml_branch_coverage=1 00:34:20.072 --rc genhtml_function_coverage=1 00:34:20.072 --rc genhtml_legend=1 00:34:20.072 --rc geninfo_all_blocks=1 00:34:20.072 --rc geninfo_unexecuted_blocks=1 00:34:20.072 00:34:20.072 ' 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:20.072 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.073 18:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:28.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.214 18:50:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:28.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:28.214 Found net devices under 0000:31:00.0: cvl_0_0 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:28.214 Found net devices under 0000:31:00.1: cvl_0_1 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:28.214 18:50:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:34:28.214 00:34:28.214 --- 10.0.0.2 ping statistics --- 00:34:28.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.214 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:34:28.214 00:34:28.214 --- 10.0.0.1 ping statistics --- 00:34:28.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.214 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:28.214 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1509124 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1509124 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1509124 ']' 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:28.215 18:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.215 [2024-10-08 18:50:21.599631] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.215 [2024-10-08 18:50:21.600819] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:34:28.215 [2024-10-08 18:50:21.600873] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.215 [2024-10-08 18:50:21.691055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:28.215 [2024-10-08 18:50:21.784562] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
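Note: nvmf_tcp_init above rebuilt the same two-port loopback topology for the interrupt test: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened on the initiator-side port, and one ping in each direction proves reachability; the target is then started with core mask 0x3 (cores 0-1) and --interrupt-mode. The plumbing, collected into one runnable block (device and namespace names as on this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on port 4420
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator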
00:34:28.215 [2024-10-08 18:50:21.784621] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.215 [2024-10-08 18:50:21.784629] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.215 [2024-10-08 18:50:21.784637] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.215 [2024-10-08 18:50:21.784643] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.215 [2024-10-08 18:50:21.785762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.215 [2024-10-08 18:50:21.785764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.215 [2024-10-08 18:50:21.861998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.215 [2024-10-08 18:50:21.862600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:28.215 [2024-10-08 18:50:21.862910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:28.476 5000+0 records in 00:34:28.476 5000+0 records out 00:34:28.476 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0190513 s, 537 MB/s 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.476 AIO0 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.476 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.476 [2024-10-08 18:50:22.530833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.737 18:50:22 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.737 [2024-10-08 18:50:22.583266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1509124 0 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1509124 0 idle 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:28.737 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509124 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0' 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509124 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.34 reactor_0 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1509124 1 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1509124 1 idle 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:28.738 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509138 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509138 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1509494 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
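The load generator just launched is spdk_nvme_perf; the flags below are standard options for that tool (meanings per its --help), restated with the values from this run:

    # -q 256  : 256 outstanding I/Os per queue
    # -o 4096 : 4 KiB I/O size
    # -w randrw -M 30 : random mixed workload, 30% reads / 70% writes
    # -t 10   : run for 10 seconds
    # -c 0xC  : initiator cores 2-3, clear of the target's cores 0-1
    # -r ...  : transport ID of the TCP listener created above
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Keeping the initiator on cores 2-3 is what lets the busy checks that follow attribute any reactor_0/reactor_1 CPU load to the target alone.
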
00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1509124 0 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1509124 0 busy 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:28.999 18:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509124 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.35 reactor_0' 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509124 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.35 reactor_0 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:29.261 18:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:30.205 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:30.205 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:30.205 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:30.205 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509124 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.64 reactor_0' 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509124 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.64 reactor_0 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1509124 1 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1509124 1 busy 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:30.466 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509138 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.34 reactor_1' 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509138 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.34 reactor_1 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:30.727 18:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1509494 00:34:40.727 Initializing NVMe Controllers 00:34:40.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:40.727 Controller IO queue size 256, less than required. 00:34:40.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:40.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:40.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:40.727 Initialization complete. Launching workers. 
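The latency table that follows reports one row per initiator core (lcore 2 and lcore 3); the Total row is just their IOPS-weighted combination. A quick cross-check with the numbers from this run, using awk purely as a calculator:

    awk 'BEGIN {
        iops = 19243.80 + 20295.19               # 39538.99 total IOPS
        mibs = iops * 4096 / (1024 * 1024)       # 4 KiB IOs -> ~154.45 MiB/s
        avg  = (19243.80 * 13307.89 + 20295.19 * 12615.97) / iops  # ~12952.7 us
        printf "%.2f IOPS  %.2f MiB/s  %.2f us avg\n", iops, mibs, avg
    }'
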
00:34:40.727 ======================================================== 00:34:40.727 Latency(us) 00:34:40.727 Device Information : IOPS MiB/s Average min max 00:34:40.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19243.80 75.17 13307.89 3905.92 36619.05 00:34:40.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20295.19 79.28 12615.97 8037.09 51911.22 00:34:40.727 ======================================================== 00:34:40.727 Total : 39538.99 154.45 12952.73 3905.92 51911.22 00:34:40.727 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1509124 0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1509124 0 idle 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509124 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:20.34 reactor_0' 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509124 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:20.34 reactor_0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1509124 1 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1509124 1 idle 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509138 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509138 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:40.727 18:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:40.727 18:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:40.727 18:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:34:40.727 18:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:40.727 18:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:40.727 18:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:34:42.640 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:42.640 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:42.640 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1509124 0 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1509124 0 idle 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509124 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.73 reactor_0' 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509124 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.73 reactor_0 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1509124 1 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1509124 1 idle 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1509124 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
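The reactor_1 probe that follows repeats the same idle test as the reactor_0 check above. Stripped of its retry loop, the core of reactor_is_busy_or_idle is a one-shot batch-mode top sample; a minimal sketch (field 9 assumes top's default column layout, and the threshold mirrors interrupt/common.sh):

    pid=1509124 idx=1 idle_threshold=30
    # %CPU of the reactor_<idx> thread from a single batch-mode top sample
    cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | awk '{print $9}')
    cpu=${cpu%%.*}          # truncate "6.7" -> 6, "0.0" -> 0, like the harness
    if (( cpu > idle_threshold )); then echo busy; else echo idle; fi

In interrupt mode the reactors park in epoll instead of polling, so the post-I/O readings of 0.0-6.7% here, against 99.9% while perf was running, are exactly the signal this test is after.
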
00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1509124 -w 256 00:34:42.641 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1509138 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.16 reactor_1' 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1509138 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.16 reactor_1 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:42.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.902 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.902 rmmod nvme_tcp 00:34:42.902 rmmod nvme_fabrics 00:34:43.162 rmmod nvme_keyring 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
1509124 ']' 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1509124 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1509124 ']' 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1509124 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:43.162 18:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1509124 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509124' 00:34:43.162 killing process with pid 1509124 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1509124 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1509124 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:43.162 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:43.423 18:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.337 18:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.337 00:34:45.337 real 0m25.629s 00:34:45.337 user 0m40.460s 00:34:45.337 sys 0m9.971s 00:34:45.337 18:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:45.337 18:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:45.337 ************************************ 00:34:45.337 END TEST nvmf_interrupt 00:34:45.337 ************************************ 00:34:45.337 00:34:45.337 real 30m3.874s 00:34:45.337 user 61m2.275s 00:34:45.337 sys 10m19.043s 00:34:45.337 18:50:39 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:45.337 18:50:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.337 ************************************ 00:34:45.337 END TEST nvmf_tcp 00:34:45.337 ************************************ 00:34:45.337 18:50:39 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:45.337 18:50:39 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.337 18:50:39 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:45.337 18:50:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:45.337 18:50:39 -- common/autotest_common.sh@10 -- # set +x 00:34:45.599 ************************************ 00:34:45.599 START TEST spdkcli_nvmf_tcp 00:34:45.599 ************************************ 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.599 * Looking for test storage... 00:34:45.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.599 --rc genhtml_branch_coverage=1 00:34:45.599 --rc genhtml_function_coverage=1 00:34:45.599 --rc genhtml_legend=1 00:34:45.599 --rc geninfo_all_blocks=1 00:34:45.599 --rc geninfo_unexecuted_blocks=1 00:34:45.599 00:34:45.599 ' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.599 --rc genhtml_branch_coverage=1 00:34:45.599 --rc genhtml_function_coverage=1 00:34:45.599 --rc genhtml_legend=1 00:34:45.599 --rc geninfo_all_blocks=1 00:34:45.599 --rc geninfo_unexecuted_blocks=1 00:34:45.599 00:34:45.599 ' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.599 --rc genhtml_branch_coverage=1 00:34:45.599 --rc genhtml_function_coverage=1 00:34:45.599 --rc genhtml_legend=1 00:34:45.599 --rc geninfo_all_blocks=1 00:34:45.599 --rc geninfo_unexecuted_blocks=1 00:34:45.599 00:34:45.599 ' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.599 --rc genhtml_branch_coverage=1 00:34:45.599 --rc genhtml_function_coverage=1 00:34:45.599 --rc genhtml_legend=1 00:34:45.599 --rc geninfo_all_blocks=1 00:34:45.599 --rc geninfo_unexecuted_blocks=1 00:34:45.599 00:34:45.599 ' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:45.599 
18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:45.599 18:50:39 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.599 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1512681 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1512681 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1512681 ']' 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:45.861 18:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.861 [2024-10-08 18:50:39.727035] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
00:34:45.861 [2024-10-08 18:50:39.727107] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512681 ] 00:34:45.861 [2024-10-08 18:50:39.807957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:45.861 [2024-10-08 18:50:39.903603] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.861 [2024-10-08 18:50:39.903607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.804 18:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:46.804 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:46.804 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:46.804 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:46.804 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:46.804 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:46.804 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:46.804 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:46.804 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:46.804 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:46.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:46.804 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:46.804 ' 00:34:50.108 [2024-10-08 18:50:43.409567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.049 [2024-10-08 18:50:44.769853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:53.594 [2024-10-08 18:50:47.300890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:55.506 [2024-10-08 18:50:49.523272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:57.417 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:57.417 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:57.417 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:57.417 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:57.417 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:57.417 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:57.417 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:57.417 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:57.417 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:57.417 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:57.417 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:57.417 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:57.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:57.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:57.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:57.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:57.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:57.418 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:57.418 18:50:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.990 
18:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.990 18:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:57.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:57.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:57.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:57.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:57.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:57.990 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:57.990 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:57.990 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:57.990 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:57.990 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:57.990 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:57.990 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:57.990 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:57.990 ' 00:35:04.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:04.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:04.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:04.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:04.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:04.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:04.571 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:04.571 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:04.571 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:04.571 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:04.571 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:04.571 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:04.571 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:04.571 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.571 
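Note: the clear-config job above tears things down in reverse dependency order: per-subsystem children first (namespaces, hosts, listen addresses), then whole subsystems, then the malloc bdevs underneath them. A hedged sketch of the same teardown issued one command at a time through scripts/spdkcli.py (the tool the match check above already invokes; NQNs and bdev names are from this run):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  CLI="$SPDK_ROOT/scripts/spdkcli.py"

  "$CLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all       # children first
  "$CLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
  "$CLI" /nvmf/subsystem delete_all                                              # then subsystems
  "$CLI" /bdevs/malloc delete Malloc1                                            # bdevs last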
18:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1512681 ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1512681' 00:35:04.571 killing process with pid 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1512681 ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1512681 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1512681 ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1512681 00:35:04.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1512681) - No such process 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1512681 is not found' 00:35:04.571 Process with pid 1512681 is not found 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:04.571 00:35:04.571 real 0m18.297s 00:35:04.571 user 0m40.565s 00:35:04.571 sys 0m0.962s 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:04.571 18:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.571 ************************************ 00:35:04.571 END TEST spdkcli_nvmf_tcp 00:35:04.571 ************************************ 00:35:04.571 18:50:57 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:04.571 18:50:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:04.571 18:50:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:04.571 18:50:57 -- common/autotest_common.sh@10 -- # set +x 00:35:04.571 ************************************ 00:35:04.571 START TEST nvmf_identify_passthru 00:35:04.571 ************************************ 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:04.571 * Looking for test 
storage... 00:35:04.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.571 18:50:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.571 --rc genhtml_branch_coverage=1 00:35:04.571 --rc genhtml_function_coverage=1 00:35:04.571 --rc genhtml_legend=1 00:35:04.571 --rc geninfo_all_blocks=1 00:35:04.571 --rc geninfo_unexecuted_blocks=1 00:35:04.571 00:35:04.571 ' 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.571 --rc genhtml_branch_coverage=1 00:35:04.571 --rc genhtml_function_coverage=1 00:35:04.571 --rc genhtml_legend=1 00:35:04.571 --rc geninfo_all_blocks=1 00:35:04.571 --rc geninfo_unexecuted_blocks=1 00:35:04.571 00:35:04.571 ' 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.571 --rc genhtml_branch_coverage=1 00:35:04.571 --rc genhtml_function_coverage=1 00:35:04.571 --rc genhtml_legend=1 00:35:04.571 --rc geninfo_all_blocks=1 00:35:04.571 --rc geninfo_unexecuted_blocks=1 00:35:04.571 00:35:04.571 ' 00:35:04.571 18:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:04.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.571 --rc genhtml_branch_coverage=1 00:35:04.571 --rc genhtml_function_coverage=1 00:35:04.571 --rc genhtml_legend=1 00:35:04.571 --rc geninfo_all_blocks=1 00:35:04.571 --rc geninfo_unexecuted_blocks=1 00:35:04.571 00:35:04.571 ' 00:35:04.571 18:50:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.571 18:50:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.571 18:50:58 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.571 18:50:58 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.571 18:50:58 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.571 18:50:58 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.571 18:50:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.571 18:50:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.571 18:50:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.571 18:50:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:04.571 18:50:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:04.571 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:04.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.572 18:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.572 18:50:58 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.572 18:50:58 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.572 18:50:58 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.572 18:50:58 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.572 18:50:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.572 18:50:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.572 18:50:58 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.572 18:50:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:04.572 18:50:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.572 18:50:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.572 18:50:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.572 18:50:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:04.572 18:50:58 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:04.572 18:50:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.724 18:51:05 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:12.724 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.724 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:12.725 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:12.725 Found net devices under 0000:31:00.0: cvl_0_0 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:12.725 Found net devices under 0000:31:00.1: cvl_0_1 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.725 18:51:05 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:35:12.725 00:35:12.725 --- 10.0.0.2 ping statistics --- 00:35:12.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.725 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
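Note: the nvmf_tcp_init steps traced just above build the loopback topology for this test: the first E810 port moves into a private network namespace as the target side, the second stays in the root namespace as the initiator, a tagged iptables rule opens port 4420, and a ping in each direction proves the path. A hedged sketch of those steps in isolation (interface and namespace names are the ones from this run):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator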
00:35:12.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:35:12.725 00:35:12.725 --- 10.0.0.1 ping statistics --- 00:35:12.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.725 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:12.725 18:51:05 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:12.725 18:51:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:12.725 18:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:12.725 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:35:12.725 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:12.725 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:12.725 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:12.986 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.986 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:12.986 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.986 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1520261 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:12.986 18:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1520261 00:35:12.986 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1520261 ']' 00:35:12.986 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.987 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.987 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.987 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:12.987 18:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.987 [2024-10-08 18:51:06.907317] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:35:12.987 [2024-10-08 18:51:06.907383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.987 [2024-10-08 18:51:06.996550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:13.320 [2024-10-08 18:51:07.093628] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.320 [2024-10-08 18:51:07.093683] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
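Note: the target for this test was launched with --wait-for-rpc, so it idles before subsystem initialization until an RPC turns on admin-command passthrough for Identify and framework_start_init releases it; the 'Custom identify ctrlr handler enabled' notice just below confirms the switch took effect before the transport came up. A hedged sketch of that handshake from a shell, assuming the stock scripts/rpc.py client (the test waits for /var/tmp/spdk.sock before issuing these):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # passthru config must land while the target is still paused (pre-init only)
  "$SPDK_ROOT/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
  "$SPDK_ROOT/scripts/rpc.py" framework_start_init
  "$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192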
00:35:13.320 [2024-10-08 18:51:07.093695] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.320 [2024-10-08 18:51:07.093703] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.320 [2024-10-08 18:51:07.093709] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.320 [2024-10-08 18:51:07.095799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.320 [2024-10-08 18:51:07.095964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:13.320 [2024-10-08 18:51:07.096040] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.320 [2024-10-08 18:51:07.096039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:13.966 18:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.966 INFO: Log level set to 20 00:35:13.966 INFO: Requests: 00:35:13.966 { 00:35:13.966 "jsonrpc": "2.0", 00:35:13.966 "method": "nvmf_set_config", 00:35:13.966 "id": 1, 00:35:13.966 "params": { 00:35:13.966 "admin_cmd_passthru": { 00:35:13.966 "identify_ctrlr": true 00:35:13.966 } 00:35:13.966 } 00:35:13.966 } 00:35:13.966 00:35:13.966 INFO: response: 00:35:13.966 { 00:35:13.966 "jsonrpc": "2.0", 00:35:13.966 "id": 1, 00:35:13.966 "result": true 00:35:13.966 } 00:35:13.966 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.966 18:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.966 INFO: Setting log level to 20 00:35:13.966 INFO: Setting log level to 20 00:35:13.966 INFO: Log level set to 20 00:35:13.966 INFO: Log level set to 20 00:35:13.966 INFO: Requests: 00:35:13.966 { 00:35:13.966 "jsonrpc": "2.0", 00:35:13.966 "method": "framework_start_init", 00:35:13.966 "id": 1 00:35:13.966 } 00:35:13.966 00:35:13.966 INFO: Requests: 00:35:13.966 { 00:35:13.966 "jsonrpc": "2.0", 00:35:13.966 "method": "framework_start_init", 00:35:13.966 "id": 1 00:35:13.966 } 00:35:13.966 00:35:13.966 [2024-10-08 18:51:07.845455] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:13.966 INFO: response: 00:35:13.966 { 00:35:13.966 "jsonrpc": "2.0", 00:35:13.966 "id": 1, 00:35:13.966 "result": true 00:35:13.966 } 00:35:13.966 00:35:13.966 INFO: response: 00:35:13.966 { 00:35:13.966 "jsonrpc": "2.0", 00:35:13.966 "id": 1, 00:35:13.966 "result": true 00:35:13.966 } 00:35:13.966 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.966 18:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.966 18:51:07 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:13.966 INFO: Setting log level to 40 00:35:13.966 INFO: Setting log level to 40 00:35:13.966 INFO: Setting log level to 40 00:35:13.966 [2024-10-08 18:51:07.859079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.966 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.966 18:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:13.967 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:13.967 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.967 18:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:13.967 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.967 18:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.227 Nvme0n1 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.227 [2024-10-08 18:51:08.252857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.227 [ 00:35:14.227 { 00:35:14.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:14.227 "subtype": "Discovery", 00:35:14.227 "listen_addresses": [], 00:35:14.227 "allow_any_host": true, 00:35:14.227 "hosts": [] 00:35:14.227 }, 00:35:14.227 { 00:35:14.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.227 "subtype": "NVMe", 00:35:14.227 "listen_addresses": [ 00:35:14.227 { 00:35:14.227 "trtype": "TCP", 00:35:14.227 "adrfam": "IPv4", 00:35:14.227 "traddr": "10.0.0.2", 00:35:14.227 "trsvcid": "4420" 00:35:14.227 } 00:35:14.227 ], 00:35:14.227 "allow_any_host": true, 00:35:14.227 "hosts": [], 00:35:14.227 "serial_number": 
"SPDK00000000000001", 00:35:14.227 "model_number": "SPDK bdev Controller", 00:35:14.227 "max_namespaces": 1, 00:35:14.227 "min_cntlid": 1, 00:35:14.227 "max_cntlid": 65519, 00:35:14.227 "namespaces": [ 00:35:14.227 { 00:35:14.227 "nsid": 1, 00:35:14.227 "bdev_name": "Nvme0n1", 00:35:14.227 "name": "Nvme0n1", 00:35:14.227 "nguid": "3634473052605494002538450000002B", 00:35:14.227 "uuid": "36344730-5260-5494-0025-38450000002b" 00:35:14.227 } 00:35:14.227 ] 00:35:14.227 } 00:35:14.227 ] 00:35:14.227 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:14.227 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:14.487 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:35:14.487 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.487 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:14.487 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:14.747 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:14.747 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.747 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:14.747 18:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:14.747 rmmod nvme_tcp 00:35:14.747 rmmod nvme_fabrics 00:35:14.747 rmmod nvme_keyring 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
1520261 ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1520261 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1520261 ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1520261 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520261 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520261' 00:35:14.747 killing process with pid 1520261 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1520261 00:35:14.747 18:51:08 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1520261 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:15.007 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:15.267 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.267 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.267 18:51:09 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.267 18:51:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:15.267 18:51:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.177 18:51:11 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.177 00:35:17.177 real 0m13.355s 00:35:17.177 user 0m10.058s 00:35:17.177 sys 0m6.932s 00:35:17.177 18:51:11 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:17.177 18:51:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.177 ************************************ 00:35:17.177 END TEST nvmf_identify_passthru 00:35:17.177 ************************************ 00:35:17.177 18:51:11 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:17.177 18:51:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:17.177 18:51:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:17.177 18:51:11 -- common/autotest_common.sh@10 -- # set +x 00:35:17.177 ************************************ 00:35:17.177 START TEST nvmf_dif 00:35:17.178 ************************************ 00:35:17.178 18:51:11 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:17.438 * Looking for test storage... 
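Note: nvmftestfini then unwinds what nvmftestinit set up: the kernel NVMe modules pulled in for the host side are removed (the rmmod lines above), and only the firewall rules tagged with the SPDK_NVMF comment are stripped, which is why the ACCEPT rule was installed with -m comment in the first place. A hedged sketch of that unwind (interface and namespace names from this run):

  modprobe -v -r nvme-tcp       # the log shows this also dropping nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics

  # drop only SPDK-tagged rules; every other rule is restored untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip -4 addr flush cvl_0_1          # release the initiator-side address
  ip netns delete cvl_0_0_ns_spdk   # hedged: what _remove_spdk_ns presumably does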
00:35:17.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.438 18:51:11 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:17.438 18:51:11 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:35:17.438 18:51:11 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:17.438 18:51:11 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.438 18:51:11 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:17.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.439 --rc genhtml_branch_coverage=1 00:35:17.439 --rc genhtml_function_coverage=1 00:35:17.439 --rc genhtml_legend=1 00:35:17.439 --rc geninfo_all_blocks=1 00:35:17.439 --rc geninfo_unexecuted_blocks=1 00:35:17.439 00:35:17.439 ' 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:17.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.439 --rc genhtml_branch_coverage=1 00:35:17.439 --rc genhtml_function_coverage=1 00:35:17.439 --rc genhtml_legend=1 00:35:17.439 --rc geninfo_all_blocks=1 00:35:17.439 --rc geninfo_unexecuted_blocks=1 00:35:17.439 00:35:17.439 ' 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:35:17.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.439 --rc genhtml_branch_coverage=1 00:35:17.439 --rc genhtml_function_coverage=1 00:35:17.439 --rc genhtml_legend=1 00:35:17.439 --rc geninfo_all_blocks=1 00:35:17.439 --rc geninfo_unexecuted_blocks=1 00:35:17.439 00:35:17.439 ' 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:17.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.439 --rc genhtml_branch_coverage=1 00:35:17.439 --rc genhtml_function_coverage=1 00:35:17.439 --rc genhtml_legend=1 00:35:17.439 --rc geninfo_all_blocks=1 00:35:17.439 --rc geninfo_unexecuted_blocks=1 00:35:17.439 00:35:17.439 ' 00:35:17.439 18:51:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.439 18:51:11 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.439 18:51:11 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.439 18:51:11 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.439 18:51:11 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.439 18:51:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.439 18:51:11 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.439 18:51:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.439 18:51:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:17.439 18:51:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.439 18:51:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:17.439 18:51:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:17.439 18:51:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:17.439 18:51:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:17.439 18:51:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:17.439 18:51:11 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:17.439 18:51:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:25.574 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.574 
18:51:18 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:25.574 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:25.574 Found net devices under 0000:31:00.0: cvl_0_0 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:25.574 Found net devices under 0000:31:00.1: cvl_0_1 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:35:25.574 00:35:25.574 --- 10.0.0.2 ping statistics --- 00:35:25.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.574 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:35:25.574 00:35:25.574 --- 10.0.0.1 ping statistics --- 00:35:25.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.574 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.574 18:51:18 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:25.575 18:51:18 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:25.575 18:51:18 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:28.132 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:28.132 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:28.133 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:28.133 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:28.711 18:51:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:28.711 18:51:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1526986 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1526986 00:35:28.711 18:51:22 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1526986 ']' 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:28.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:28.711 18:51:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.711 [2024-10-08 18:51:22.621923] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:35:28.711 [2024-10-08 18:51:22.621997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.711 [2024-10-08 18:51:22.712996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.972 [2024-10-08 18:51:22.806369] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.972 [2024-10-08 18:51:22.806430] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.972 [2024-10-08 18:51:22.806439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.972 [2024-10-08 18:51:22.806446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.972 [2024-10-08 18:51:22.806453] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:28.972 [2024-10-08 18:51:22.807296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:29.544 18:51:23 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 18:51:23 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.544 18:51:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:29.544 18:51:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 [2024-10-08 18:51:23.452646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.544 18:51:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 ************************************ 00:35:29.544 START TEST fio_dif_1_default 00:35:29.544 ************************************ 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 bdev_null0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.544 [2024-10-08 18:51:23.537007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:29.544 { 00:35:29.544 "params": { 00:35:29.544 "name": "Nvme$subsystem", 00:35:29.544 "trtype": "$TEST_TRANSPORT", 00:35:29.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.544 "adrfam": "ipv4", 00:35:29.544 "trsvcid": "$NVMF_PORT", 00:35:29.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.544 "hdgst": ${hdgst:-false}, 00:35:29.544 
"ddgst": ${ddgst:-false} 00:35:29.544 }, 00:35:29.544 "method": "bdev_nvme_attach_controller" 00:35:29.544 } 00:35:29.544 EOF 00:35:29.544 )") 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:29.544 "params": { 00:35:29.544 "name": "Nvme0", 00:35:29.544 "trtype": "tcp", 00:35:29.544 "traddr": "10.0.0.2", 00:35:29.544 "adrfam": "ipv4", 00:35:29.544 "trsvcid": "4420", 00:35:29.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.544 "hdgst": false, 00:35:29.544 "ddgst": false 00:35:29.544 }, 00:35:29.544 "method": "bdev_nvme_attach_controller" 00:35:29.544 }' 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:29.544 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:29.830 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:29.830 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:29.830 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.830 18:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.093 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:30.094 fio-3.35 00:35:30.094 Starting 1 thread 00:35:42.323 00:35:42.323 filename0: (groupid=0, jobs=1): err= 0: pid=1527513: Tue Oct 8 18:51:34 2024 00:35:42.323 read: IOPS=192, BW=769KiB/s (787kB/s)(7712KiB/10035msec) 00:35:42.323 slat (nsec): min=5440, max=66049, avg=6278.39, stdev=1984.20 00:35:42.323 clat (usec): min=515, max=43908, avg=20802.35, stdev=20269.81 00:35:42.323 lat (usec): min=520, max=43949, avg=20808.62, stdev=20269.78 00:35:42.323 clat percentiles (usec): 00:35:42.323 | 1.00th=[ 562], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 766], 00:35:42.323 | 30.00th=[ 799], 40.00th=[ 873], 50.00th=[ 979], 60.00th=[41157], 00:35:42.323 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:42.323 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:35:42.323 | 99.99th=[43779] 00:35:42.323 bw ( KiB/s): min= 672, max= 896, per=100.00%, avg=769.60, stdev=45.82, samples=20 00:35:42.323 iops : min= 168, max= 224, avg=192.40, stdev=11.45, samples=20 00:35:42.323 lat (usec) : 750=14.11%, 1000=36.41% 00:35:42.323 lat (msec) : 2=0.10%, 50=49.38% 00:35:42.323 cpu : usr=93.26%, sys=6.51%, ctx=12, majf=0, minf=247 00:35:42.323 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.323 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.323 latency : target=0, window=0, percentile=100.00%, depth=4 
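The per-job summary above is internally consistent and easy to cross-check by hand: the job completed 1928 reads of 4096 B (io=7712KiB) over a 10035 ms runtime. Plain bash integer arithmetic, with all figures taken from the run above:

echo $(( 1928 * 1000 / 10035 ))   # 192  -> matches IOPS=192
echo $(( 7712 * 1000 / 10035 ))   # 768  -> exact 768.5 KiB/s, reported as BW=769KiB/s
echo $(( 7712 * 1024 / 10035 ))   # 786  -> exact 786.9 kB/s, reported as (787kB/s)
echo $(( 7712 * 1024 / 1000 ))    # 7897 -> matches the (7897kB) SI figure for io=7712KiB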
00:35:42.323 00:35:42.323 Run status group 0 (all jobs): 00:35:42.323 READ: bw=769KiB/s (787kB/s), 769KiB/s-769KiB/s (787kB/s-787kB/s), io=7712KiB (7897kB), run=10035-10035msec 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.323 00:35:42.323 real 0m11.195s 00:35:42.323 user 0m25.083s 00:35:42.323 sys 0m1.036s 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.323 18:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.323 ************************************ 00:35:42.323 END TEST fio_dif_1_default 00:35:42.323 ************************************ 00:35:42.323 18:51:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:42.323 18:51:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.323 18:51:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:42.323 18:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 ************************************ 00:35:42.324 START TEST fio_dif_1_multi_subsystems 00:35:42.324 ************************************ 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 bdev_null0 00:35:42.324 18:51:34 
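create_subsystem, entered above, drives four RPCs per subsystem id: create a DIF-enabled null bdev (its output, bdev_null0, appears just above; the remaining three calls follow below), create the NQN, attach the bdev as a namespace, and add a TCP listener. The same sequence can be replayed outside the harness with scripts/rpc.py, which rpc_cmd wraps; a sketch assuming the target's RPC socket at the default /var/tmp/spdk.sock, with sizes mirroring NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16 and --dif-type 1 from target/dif.sh:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420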
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 [2024-10-08 18:51:34.816442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 bdev_null1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:42.324 { 00:35:42.324 "params": { 00:35:42.324 "name": "Nvme$subsystem", 00:35:42.324 "trtype": "$TEST_TRANSPORT", 00:35:42.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.324 "adrfam": "ipv4", 00:35:42.324 "trsvcid": "$NVMF_PORT", 00:35:42.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.324 "hdgst": ${hdgst:-false}, 00:35:42.324 "ddgst": ${ddgst:-false} 00:35:42.324 }, 00:35:42.324 "method": "bdev_nvme_attach_controller" 00:35:42.324 } 00:35:42.324 EOF 00:35:42.324 )") 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.324 
18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:42.324 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:42.324 { 00:35:42.324 "params": { 00:35:42.324 "name": "Nvme$subsystem", 00:35:42.324 "trtype": "$TEST_TRANSPORT", 00:35:42.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.324 "adrfam": "ipv4", 00:35:42.324 "trsvcid": "$NVMF_PORT", 00:35:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.340 "hdgst": ${hdgst:-false}, 00:35:42.340 "ddgst": ${ddgst:-false} 00:35:42.340 }, 00:35:42.340 "method": "bdev_nvme_attach_controller" 00:35:42.340 } 00:35:42.340 EOF 00:35:42.340 )") 00:35:42.340 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:42.340 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.340 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:42.340 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:35:42.340 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:35:42.340 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:42.340 "params": { 00:35:42.340 "name": "Nvme0", 00:35:42.340 "trtype": "tcp", 00:35:42.340 "traddr": "10.0.0.2", 00:35:42.340 "adrfam": "ipv4", 00:35:42.340 "trsvcid": "4420", 00:35:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.340 "hdgst": false, 00:35:42.340 "ddgst": false 00:35:42.340 }, 00:35:42.340 "method": "bdev_nvme_attach_controller" 00:35:42.340 },{ 00:35:42.340 "params": { 00:35:42.340 "name": "Nvme1", 00:35:42.340 "trtype": "tcp", 00:35:42.340 "traddr": "10.0.0.2", 00:35:42.340 "adrfam": "ipv4", 00:35:42.340 "trsvcid": "4420", 00:35:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.341 "hdgst": false, 00:35:42.341 "ddgst": false 00:35:42.341 }, 00:35:42.341 "method": "bdev_nvme_attach_controller" 00:35:42.341 }' 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.341 18:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.341 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:42.341 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:42.341 fio-3.35 00:35:42.341 Starting 2 threads 00:35:52.338 00:35:52.338 filename0: (groupid=0, jobs=1): err= 0: pid=1529712: Tue Oct 8 18:51:45 2024 00:35:52.338 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10042msec) 00:35:52.338 slat (nsec): min=5429, max=31873, avg=6478.25, stdev=2207.57 00:35:52.338 clat (usec): min=456, max=42188, avg=21033.05, stdev=20171.10 00:35:52.338 lat (usec): min=461, max=42216, avg=21039.52, stdev=20170.94 00:35:52.338 clat percentiles (usec): 00:35:52.338 | 1.00th=[ 570], 5.00th=[ 775], 10.00th=[ 791], 20.00th=[ 816], 00:35:52.338 | 30.00th=[ 832], 40.00th=[ 857], 50.00th=[40633], 60.00th=[41157], 00:35:52.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:52.338 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:52.338 | 99.99th=[42206] 00:35:52.338 bw ( KiB/s): min= 704, max= 768, per=50.17%, avg=761.60, stdev=19.70, samples=20 00:35:52.338 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:35:52.338 lat (usec) : 500=0.21%, 750=2.20%, 1000=46.44% 00:35:52.338 lat (msec) : 2=1.05%, 50=50.10% 00:35:52.338 cpu : usr=95.48%, sys=4.30%, ctx=14, majf=0, minf=178 00:35:52.338 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.338 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.338 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:52.338 filename1: (groupid=0, jobs=1): err= 0: pid=1529713: Tue Oct 8 18:51:45 2024 00:35:52.338 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:35:52.338 slat (nsec): min=5421, max=31345, avg=6322.96, stdev=2199.39 00:35:52.338 clat (usec): min=484, max=42290, avg=21038.18, stdev=20151.40 00:35:52.338 lat (usec): min=490, max=42319, avg=21044.50, stdev=20151.38 00:35:52.338 clat percentiles (usec): 00:35:52.338 | 1.00th=[ 594], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 840], 00:35:52.338 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:35:52.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:52.338 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:52.338 | 99.99th=[42206] 00:35:52.338 bw ( KiB/s): min= 704, max= 768, per=50.17%, avg=761.26, stdev=20.18, samples=19 00:35:52.338 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:35:52.338 lat (usec) : 500=0.16%, 750=2.05%, 1000=46.21% 00:35:52.338 lat (msec) : 2=1.47%, 50=50.11% 00:35:52.338 cpu : usr=95.53%, sys=4.26%, ctx=13, majf=0, minf=96 00:35:52.338 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.338 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.339 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.339 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:52.339 00:35:52.339 Run status group 0 (all jobs): 00:35:52.339 READ: bw=1517KiB/s (1553kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=14.9MiB (15.6MB), run=10002-10042msec 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 00:35:52.339 real 0m11.367s 00:35:52.339 user 0m31.592s 00:35:52.339 sys 0m1.248s 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 ************************************ 00:35:52.339 END TEST fio_dif_1_multi_subsystems 00:35:52.339 ************************************ 00:35:52.339 
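The group summary for the two-thread run above aggregates the per-file jobs: io=14.9MiB is the sum of the per-job totals (7632 KiB + 7600 KiB), and the group bandwidth is that total divided by the longest runtime in the group. Checked with bash integer arithmetic, figures from the run above:

echo $(( 7632 + 7600 ))            # 15232 KiB -> 14.875 MiB, reported as io=14.9MiB
echo $(( 15232 * 1000 / 10042 ))   # 1516 -> exact 1516.8 KiB/s, reported as bw=1517KiB/s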
18:51:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:52.339 18:51:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:52.339 18:51:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 ************************************ 00:35:52.339 START TEST fio_dif_rand_params 00:35:52.339 ************************************ 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 bdev_null0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.339 [2024-10-08 18:51:46.265289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:52.339 { 00:35:52.339 "params": { 00:35:52.339 "name": "Nvme$subsystem", 00:35:52.339 "trtype": "$TEST_TRANSPORT", 00:35:52.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.339 "adrfam": "ipv4", 00:35:52.339 "trsvcid": "$NVMF_PORT", 00:35:52.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.339 "hdgst": ${hdgst:-false}, 00:35:52.339 "ddgst": ${ddgst:-false} 00:35:52.339 }, 00:35:52.339 "method": "bdev_nvme_attach_controller" 00:35:52.339 } 00:35:52.339 EOF 00:35:52.339 )") 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.339 18:51:46 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:52.339 "params": { 00:35:52.339 "name": "Nvme0", 00:35:52.339 "trtype": "tcp", 00:35:52.339 "traddr": "10.0.0.2", 00:35:52.339 "adrfam": "ipv4", 00:35:52.339 "trsvcid": "4420", 00:35:52.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.339 "hdgst": false, 00:35:52.339 "ddgst": false 00:35:52.339 }, 00:35:52.339 "method": "bdev_nvme_attach_controller" 00:35:52.339 }' 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.339 18:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.915 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:52.915 ... 
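The trace above is target/dif.sh's first case end to end: it backs subsystem nqn.2016-06.io.spdk:cnode0 with a 64 MiB null bdev (512-byte blocks, 16 bytes of per-block metadata, DIF type 3), exports it over NVMe/TCP on 10.0.0.2:4420, feeds a bdev_nvme_attach_controller JSON config to fio on /dev/fd/62, and launches fio through the spdk_bdev ioengine plugin with bs=128k, numjobs=3, iodepth=3 and runtime=5. What follows is a minimal standalone sketch of that same sequence, assuming a running nvmf_tgt, SPDK's scripts/rpc.py on PATH and the fio plugin built under build/fio/; the conf.json and dif.fio file names, the outer "subsystems" wrapper, the Nvme0n1 bdev name and the time_based option are illustrative assumptions, not values taken from this log.

# 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 3,
# exported over NVMe/TCP on 10.0.0.2:4420 (same RPCs as traced above):
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Bdev config handed to fio; the params object is exactly the one the
# harness printed above, while the "subsystems"/"bdev" wrapper is SPDK's
# standard app-config shape and is assumed here (the harness builds it
# out of view of this trace):
cat > conf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF

# fio job matching the traced parameters; Nvme0n1 is the bdev name SPDK
# derives from controller Nvme0's first namespace (an assumption of this
# sketch), thread=1 is required by the spdk_bdev plugin, and time_based
# is assumed so runtime=5 bounds the run:
cat > dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

# Run fio through the SPDK bdev ioengine, as the harness does via
# LD_PRELOAD of the plugin:
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=conf.json dif.fio

The /dev/fd/62 and /dev/fd/61 arguments in the traced invocation are these same two files delivered by process substitution, which just spares the harness from writing conf.json and dif.fio to disk.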
00:35:52.915 fio-3.35 00:35:52.915 Starting 3 threads 00:35:59.504 00:35:59.504 filename0: (groupid=0, jobs=1): err= 0: pid=1532040: Tue Oct 8 18:51:52 2024 00:35:59.504 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5044msec) 00:35:59.504 slat (nsec): min=5498, max=32188, avg=8158.57, stdev=2014.62 00:35:59.504 clat (usec): min=5456, max=89746, avg=9461.89, stdev=6223.09 00:35:59.504 lat (usec): min=5464, max=89759, avg=9470.05, stdev=6223.31 00:35:59.504 clat percentiles (usec): 00:35:59.504 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7701], 00:35:59.505 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:35:59.505 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10814], 00:35:59.505 | 99.00th=[48497], 99.50th=[51643], 99.90th=[88605], 99.95th=[89654], 00:35:59.505 | 99.99th=[89654] 00:35:59.505 bw ( KiB/s): min=21760, max=48128, per=33.46%, avg=40729.60, stdev=7247.26, samples=10 00:35:59.505 iops : min= 170, max= 376, avg=318.20, stdev=56.62, samples=10 00:35:59.505 lat (msec) : 10=83.80%, 20=14.63%, 50=0.94%, 100=0.63% 00:35:59.505 cpu : usr=94.61%, sys=5.16%, ctx=7, majf=0, minf=140 00:35:59.505 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.505 issued rwts: total=1593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.505 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.505 filename0: (groupid=0, jobs=1): err= 0: pid=1532041: Tue Oct 8 18:51:52 2024 00:35:59.505 read: IOPS=319, BW=39.9MiB/s (41.8MB/s)(201MiB/5046msec) 00:35:59.505 slat (nsec): min=8041, max=32123, avg=8802.52, stdev=1111.32 00:35:59.505 clat (usec): min=4380, max=89017, avg=9358.84, stdev=6915.55 00:35:59.505 lat (usec): min=4389, max=89025, avg=9367.64, stdev=6915.68 00:35:59.505 clat percentiles (usec): 00:35:59.505 | 1.00th=[ 4817], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7242], 00:35:59.505 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 8717], 00:35:59.505 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10290], 00:35:59.505 | 99.00th=[47973], 99.50th=[48497], 99.90th=[51119], 99.95th=[88605], 00:35:59.505 | 99.99th=[88605] 00:35:59.505 bw ( KiB/s): min=25088, max=46592, per=33.84%, avg=41190.40, stdev=6315.19, samples=10 00:35:59.505 iops : min= 196, max= 364, avg=321.80, stdev=49.34, samples=10 00:35:59.505 lat (msec) : 10=92.18%, 20=4.97%, 50=2.67%, 100=0.19% 00:35:59.505 cpu : usr=94.41%, sys=5.33%, ctx=11, majf=0, minf=36 00:35:59.505 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.505 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.505 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.505 filename0: (groupid=0, jobs=1): err= 0: pid=1532042: Tue Oct 8 18:51:52 2024 00:35:59.505 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5045msec) 00:35:59.505 slat (nsec): min=5472, max=31815, avg=8181.05, stdev=1955.18 00:35:59.505 clat (usec): min=4665, max=88246, avg=9457.67, stdev=5852.14 00:35:59.505 lat (usec): min=4674, max=88254, avg=9465.85, stdev=5852.07 00:35:59.505 clat percentiles (usec): 00:35:59.505 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7570], 
00:35:59.505 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:35:59.505 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10945], 00:35:59.505 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[88605], 00:35:59.505 | 99.99th=[88605] 00:35:59.505 bw ( KiB/s): min=33024, max=47104, per=33.48%, avg=40747.70, stdev=4092.44, samples=10 00:35:59.505 iops : min= 258, max= 368, avg=318.30, stdev=32.01, samples=10 00:35:59.505 lat (msec) : 10=82.69%, 20=15.37%, 50=1.63%, 100=0.31% 00:35:59.505 cpu : usr=94.47%, sys=5.29%, ctx=8, majf=0, minf=106 00:35:59.505 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.505 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.505 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:59.505 00:35:59.505 Run status group 0 (all jobs): 00:35:59.505 READ: bw=119MiB/s (125MB/s), 39.5MiB/s-39.9MiB/s (41.4MB/s-41.8MB/s), io=600MiB (629MB), run=5044-5046msec 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 bdev_null0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 [2024-10-08 18:51:52.463382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 bdev_null1 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 bdev_null2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:59.505 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.506 18:51:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:59.506 { 00:35:59.506 "params": { 00:35:59.506 "name": "Nvme$subsystem", 00:35:59.506 "trtype": "$TEST_TRANSPORT", 00:35:59.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.506 "adrfam": "ipv4", 00:35:59.506 "trsvcid": "$NVMF_PORT", 00:35:59.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.506 "hdgst": ${hdgst:-false}, 00:35:59.506 "ddgst": ${ddgst:-false} 00:35:59.506 }, 00:35:59.506 "method": "bdev_nvme_attach_controller" 00:35:59.506 } 00:35:59.506 EOF 00:35:59.506 )") 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:59.506 { 00:35:59.506 "params": { 00:35:59.506 "name": "Nvme$subsystem", 00:35:59.506 "trtype": "$TEST_TRANSPORT", 00:35:59.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.506 "adrfam": "ipv4", 00:35:59.506 "trsvcid": "$NVMF_PORT", 00:35:59.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.506 "hdgst": ${hdgst:-false}, 00:35:59.506 "ddgst": ${ddgst:-false} 00:35:59.506 }, 00:35:59.506 "method": "bdev_nvme_attach_controller" 00:35:59.506 } 00:35:59.506 EOF 00:35:59.506 )") 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.506 18:51:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:59.506 { 00:35:59.506 "params": { 00:35:59.506 "name": "Nvme$subsystem", 00:35:59.506 "trtype": "$TEST_TRANSPORT", 00:35:59.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.506 "adrfam": "ipv4", 00:35:59.506 "trsvcid": "$NVMF_PORT", 00:35:59.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.506 "hdgst": ${hdgst:-false}, 00:35:59.506 "ddgst": ${ddgst:-false} 00:35:59.506 }, 00:35:59.506 "method": "bdev_nvme_attach_controller" 00:35:59.506 } 00:35:59.506 EOF 00:35:59.506 )") 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:59.506 "params": { 00:35:59.506 "name": "Nvme0", 00:35:59.506 "trtype": "tcp", 00:35:59.506 "traddr": "10.0.0.2", 00:35:59.506 "adrfam": "ipv4", 00:35:59.506 "trsvcid": "4420", 00:35:59.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.506 "hdgst": false, 00:35:59.506 "ddgst": false 00:35:59.506 }, 00:35:59.506 "method": "bdev_nvme_attach_controller" 00:35:59.506 },{ 00:35:59.506 "params": { 00:35:59.506 "name": "Nvme1", 00:35:59.506 "trtype": "tcp", 00:35:59.506 "traddr": "10.0.0.2", 00:35:59.506 "adrfam": "ipv4", 00:35:59.506 "trsvcid": "4420", 00:35:59.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:59.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:59.506 "hdgst": false, 00:35:59.506 "ddgst": false 00:35:59.506 }, 00:35:59.506 "method": "bdev_nvme_attach_controller" 00:35:59.506 },{ 00:35:59.506 "params": { 00:35:59.506 "name": "Nvme2", 00:35:59.506 "trtype": "tcp", 00:35:59.506 "traddr": "10.0.0.2", 00:35:59.506 "adrfam": "ipv4", 00:35:59.506 "trsvcid": "4420", 00:35:59.506 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:59.506 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:59.506 "hdgst": false, 00:35:59.506 "ddgst": false 00:35:59.506 }, 00:35:59.506 "method": "bdev_nvme_attach_controller" 00:35:59.506 }' 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:59.506 
18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:59.506 18:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.506 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.506 ... 00:35:59.506 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.506 ... 00:35:59.506 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:59.506 ... 00:35:59.506 fio-3.35 00:35:59.506 Starting 24 threads 00:36:11.748 00:36:11.748 filename0: (groupid=0, jobs=1): err= 0: pid=1533412: Tue Oct 8 18:52:03 2024 00:36:11.748 read: IOPS=741, BW=2967KiB/s (3039kB/s)(29.0MiB/10017msec) 00:36:11.748 slat (nsec): min=5432, max=69266, avg=7522.35, stdev=4011.99 00:36:11.748 clat (usec): min=8512, max=26027, avg=21500.30, stdev=3582.93 00:36:11.748 lat (usec): min=8520, max=26035, avg=21507.82, stdev=3583.13 00:36:11.748 clat percentiles (usec): 00:36:11.748 | 1.00th=[12387], 5.00th=[14222], 10.00th=[15008], 20.00th=[17433], 00:36:11.748 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:11.748 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:36:11.748 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26084], 99.95th=[26084], 00:36:11.748 | 99.99th=[26084] 00:36:11.748 bw ( KiB/s): min= 2688, max= 3712, per=4.45%, avg=2966.00, stdev=271.89, samples=20 00:36:11.748 iops : min= 672, max= 928, avg=741.50, stdev=67.97, samples=20 00:36:11.748 lat (msec) : 10=0.30%, 20=24.80%, 50=74.90% 00:36:11.748 cpu : usr=99.07%, sys=0.61%, ctx=12, majf=0, minf=9 00:36:11.748 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 issued rwts: total=7431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.748 filename0: (groupid=0, jobs=1): err= 0: pid=1533413: Tue Oct 8 18:52:03 2024 00:36:11.748 read: IOPS=678, BW=2713KiB/s (2778kB/s)(26.5MiB/10004msec) 00:36:11.748 slat (usec): min=5, max=110, avg=18.19, stdev=13.52 00:36:11.748 clat (usec): min=11542, max=31322, avg=23440.95, stdev=1165.88 00:36:11.748 lat (usec): min=11548, max=31330, avg=23459.14, stdev=1164.81 00:36:11.748 clat percentiles (usec): 00:36:11.748 | 1.00th=[21365], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:36:11.748 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:36:11.748 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:11.748 | 99.00th=[25560], 99.50th=[26084], 99.90th=[31327], 99.95th=[31327], 00:36:11.748 | 99.99th=[31327] 00:36:11.748 bw ( KiB/s): min= 2688, max= 2816, per=4.06%, avg=2708.21, stdev=47.95, samples=19 00:36:11.748 iops : min= 672, max= 704, avg=677.05, stdev=11.99, samples=19 00:36:11.748 lat (msec) : 20=0.71%, 50=99.29% 00:36:11.748 cpu : usr=98.99%, sys=0.70%, ctx=14, majf=0, minf=9 00:36:11.748 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 00:36:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.748 filename0: (groupid=0, jobs=1): err= 0: pid=1533414: Tue Oct 8 18:52:03 2024 00:36:11.748 read: IOPS=682, BW=2729KiB/s (2794kB/s)(26.7MiB/10004msec) 00:36:11.748 slat (usec): min=5, max=107, avg=18.35, stdev=14.40 00:36:11.748 clat (usec): min=6221, max=56949, avg=23325.83, stdev=3503.92 00:36:11.748 lat (usec): min=6232, max=56974, avg=23344.18, stdev=3505.43 00:36:11.748 clat percentiles (usec): 00:36:11.748 | 1.00th=[14615], 5.00th=[16319], 10.00th=[19530], 20.00th=[22676], 00:36:11.748 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:36:11.748 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[28705], 00:36:11.748 | 99.00th=[34341], 99.50th=[35390], 99.90th=[46400], 99.95th=[56886], 00:36:11.748 | 99.99th=[56886] 00:36:11.748 bw ( KiB/s): min= 2480, max= 3152, per=4.09%, avg=2725.05, stdev=149.03, samples=19 00:36:11.748 iops : min= 620, max= 788, avg=681.26, stdev=37.26, samples=19 00:36:11.748 lat (msec) : 10=0.23%, 20=10.81%, 50=88.88%, 100=0.07% 00:36:11.748 cpu : usr=98.97%, sys=0.72%, ctx=14, majf=0, minf=9 00:36:11.748 IO depths : 1=2.5%, 2=5.2%, 4=12.4%, 8=68.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 complete : 0=0.0%, 4=90.0%, 8=6.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 issued rwts: total=6824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.748 filename0: (groupid=0, jobs=1): err= 0: pid=1533415: Tue Oct 8 18:52:03 2024 00:36:11.748 read: IOPS=708, BW=2833KiB/s (2900kB/s)(27.7MiB/10018msec) 00:36:11.748 slat (nsec): min=5438, max=88963, avg=14275.42, stdev=11966.44 00:36:11.748 clat (usec): min=11366, max=39048, avg=22470.84, stdev=3624.68 00:36:11.748 lat (usec): min=11372, max=39056, avg=22485.12, stdev=3626.87 00:36:11.748 clat percentiles (usec): 00:36:11.748 | 1.00th=[13566], 5.00th=[15139], 10.00th=[16909], 20.00th=[20317], 00:36:11.748 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.748 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[27657], 00:36:11.748 | 99.00th=[33424], 99.50th=[34341], 99.90th=[38536], 99.95th=[39060], 00:36:11.748 | 99.99th=[39060] 00:36:11.748 bw ( KiB/s): min= 2656, max= 3104, per=4.25%, avg=2833.60, stdev=135.26, samples=20 00:36:11.748 iops : min= 664, max= 776, avg=708.40, stdev=33.81, samples=20 00:36:11.748 lat (msec) : 20=19.51%, 50=80.49% 00:36:11.748 cpu : usr=99.04%, sys=0.65%, ctx=14, majf=0, minf=9 00:36:11.748 IO depths : 1=3.4%, 2=7.0%, 4=16.2%, 8=63.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:36:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 issued rwts: total=7094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.748 filename0: (groupid=0, jobs=1): err= 0: pid=1533416: Tue Oct 8 18:52:03 2024 00:36:11.748 read: IOPS=689, BW=2756KiB/s (2823kB/s)(27.0MiB/10016msec) 00:36:11.748 slat (usec): min=5, max=110, avg=17.23, stdev=12.78 00:36:11.748 clat (usec): 
min=11194, max=40261, avg=23080.69, stdev=2343.21 00:36:11.748 lat (usec): min=11200, max=40267, avg=23097.92, stdev=2344.31 00:36:11.748 clat percentiles (usec): 00:36:11.748 | 1.00th=[13960], 5.00th=[18220], 10.00th=[21890], 20.00th=[22676], 00:36:11.748 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:11.748 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:36:11.748 | 99.00th=[29754], 99.50th=[33162], 99.90th=[38011], 99.95th=[40109], 00:36:11.748 | 99.99th=[40109] 00:36:11.748 bw ( KiB/s): min= 2666, max= 3232, per=4.13%, avg=2753.30, stdev=134.36, samples=20 00:36:11.748 iops : min= 666, max= 808, avg=688.30, stdev=33.61, samples=20 00:36:11.748 lat (msec) : 20=7.19%, 50=92.81% 00:36:11.748 cpu : usr=98.79%, sys=0.89%, ctx=64, majf=0, minf=9 00:36:11.748 IO depths : 1=5.1%, 2=10.3%, 4=21.7%, 8=55.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 issued rwts: total=6902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.748 filename0: (groupid=0, jobs=1): err= 0: pid=1533417: Tue Oct 8 18:52:03 2024 00:36:11.748 read: IOPS=689, BW=2758KiB/s (2824kB/s)(26.9MiB/10003msec) 00:36:11.748 slat (nsec): min=5434, max=96132, avg=13199.63, stdev=11246.95 00:36:11.748 clat (usec): min=3200, max=45907, avg=23138.25, stdev=3764.49 00:36:11.748 lat (usec): min=3206, max=45927, avg=23151.45, stdev=3765.41 00:36:11.748 clat percentiles (usec): 00:36:11.748 | 1.00th=[14484], 5.00th=[16450], 10.00th=[18220], 20.00th=[20841], 00:36:11.748 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:36:11.748 | 70.00th=[23987], 80.00th=[24773], 90.00th=[27132], 95.00th=[29492], 00:36:11.748 | 99.00th=[33817], 99.50th=[37487], 99.90th=[45876], 99.95th=[45876], 00:36:11.748 | 99.99th=[45876] 00:36:11.748 bw ( KiB/s): min= 2560, max= 2928, per=4.12%, avg=2746.95, stdev=92.08, samples=19 00:36:11.748 iops : min= 640, max= 732, avg=686.74, stdev=23.02, samples=19 00:36:11.748 lat (msec) : 4=0.06%, 10=0.32%, 20=15.97%, 50=83.66% 00:36:11.748 cpu : usr=98.79%, sys=0.90%, ctx=14, majf=0, minf=9 00:36:11.748 IO depths : 1=0.8%, 2=1.6%, 4=5.9%, 8=77.3%, 16=14.5%, 32=0.0%, >=64=0.0% 00:36:11.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 complete : 0=0.0%, 4=89.5%, 8=7.6%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.748 issued rwts: total=6896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.749 filename0: (groupid=0, jobs=1): err= 0: pid=1533418: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=684, BW=2739KiB/s (2805kB/s)(26.8MiB/10017msec) 00:36:11.749 slat (usec): min=5, max=103, avg=12.69, stdev= 9.63 00:36:11.749 clat (usec): min=5278, max=38256, avg=23259.87, stdev=1938.24 00:36:11.749 lat (usec): min=5287, max=38263, avg=23272.56, stdev=1938.45 00:36:11.749 clat percentiles (usec): 00:36:11.749 | 1.00th=[13829], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:36:11.749 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:36:11.749 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:36:11.749 | 99.00th=[26608], 99.50th=[28181], 99.90th=[36439], 99.95th=[38011], 00:36:11.749 | 99.99th=[38011] 00:36:11.749 bw ( KiB/s): min= 2560, max= 3040, per=4.10%, avg=2737.60, 
stdev=108.13, samples=20 00:36:11.749 iops : min= 640, max= 760, avg=684.40, stdev=27.03, samples=20 00:36:11.749 lat (msec) : 10=0.23%, 20=3.35%, 50=96.41% 00:36:11.749 cpu : usr=98.88%, sys=0.81%, ctx=13, majf=0, minf=9 00:36:11.749 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 issued rwts: total=6860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.749 filename0: (groupid=0, jobs=1): err= 0: pid=1533419: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=699, BW=2797KiB/s (2864kB/s)(27.3MiB/10011msec) 00:36:11.749 slat (usec): min=5, max=125, avg=18.86, stdev=16.56 00:36:11.749 clat (usec): min=9898, max=39826, avg=22728.41, stdev=3294.48 00:36:11.749 lat (usec): min=9907, max=39832, avg=22747.27, stdev=3297.16 00:36:11.749 clat percentiles (usec): 00:36:11.749 | 1.00th=[13698], 5.00th=[15926], 10.00th=[17695], 20.00th=[22152], 00:36:11.749 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.749 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 00:36:11.749 | 99.00th=[34341], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:36:11.749 | 99.99th=[39584] 00:36:11.749 bw ( KiB/s): min= 2576, max= 3424, per=4.19%, avg=2792.42, stdev=193.72, samples=19 00:36:11.749 iops : min= 644, max= 856, avg=698.11, stdev=48.43, samples=19 00:36:11.749 lat (msec) : 10=0.14%, 20=14.01%, 50=85.84% 00:36:11.749 cpu : usr=99.00%, sys=0.69%, ctx=16, majf=0, minf=9 00:36:11.749 IO depths : 1=2.6%, 2=7.4%, 4=20.2%, 8=59.6%, 16=10.3%, 32=0.0%, >=64=0.0% 00:36:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 issued rwts: total=7000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.749 filename1: (groupid=0, jobs=1): err= 0: pid=1533420: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=687, BW=2750KiB/s (2816kB/s)(26.9MiB/10003msec) 00:36:11.749 slat (usec): min=5, max=133, avg=23.80, stdev=20.58 00:36:11.749 clat (usec): min=3482, max=45379, avg=23068.27, stdev=3579.79 00:36:11.749 lat (usec): min=3488, max=45400, avg=23092.07, stdev=3581.04 00:36:11.749 clat percentiles (usec): 00:36:11.749 | 1.00th=[13173], 5.00th=[16188], 10.00th=[19268], 20.00th=[22414], 00:36:11.749 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.749 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[27919], 00:36:11.749 | 99.00th=[36439], 99.50th=[38011], 99.90th=[45351], 99.95th=[45351], 00:36:11.749 | 99.99th=[45351] 00:36:11.749 bw ( KiB/s): min= 2452, max= 2880, per=4.11%, avg=2743.79, stdev=107.70, samples=19 00:36:11.749 iops : min= 613, max= 720, avg=685.95, stdev=26.92, samples=19 00:36:11.749 lat (msec) : 4=0.10%, 10=0.10%, 20=11.43%, 50=88.37% 00:36:11.749 cpu : usr=98.44%, sys=1.01%, ctx=81, majf=0, minf=9 00:36:11.749 IO depths : 1=1.9%, 2=6.4%, 4=19.0%, 8=61.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 issued rwts: total=6877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.749 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:36:11.749 filename1: (groupid=0, jobs=1): err= 0: pid=1533421: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=706, BW=2828KiB/s (2896kB/s)(27.6MiB/10006msec) 00:36:11.749 slat (usec): min=5, max=108, avg=21.71, stdev=17.37 00:36:11.749 clat (usec): min=5164, max=39395, avg=22453.06, stdev=3947.78 00:36:11.749 lat (usec): min=5173, max=39415, avg=22474.77, stdev=3950.32 00:36:11.749 clat percentiles (usec): 00:36:11.749 | 1.00th=[13042], 5.00th=[14877], 10.00th=[16450], 20.00th=[20317], 00:36:11.749 | 30.00th=[22414], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:11.749 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[27919], 00:36:11.749 | 99.00th=[36439], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:36:11.749 | 99.99th=[39584] 00:36:11.749 bw ( KiB/s): min= 2512, max= 3232, per=4.24%, avg=2830.32, stdev=185.59, samples=19 00:36:11.749 iops : min= 628, max= 808, avg=707.58, stdev=46.40, samples=19 00:36:11.749 lat (msec) : 10=0.14%, 20=19.03%, 50=80.83% 00:36:11.749 cpu : usr=98.19%, sys=1.12%, ctx=169, majf=0, minf=9 00:36:11.749 IO depths : 1=4.2%, 2=8.4%, 4=18.9%, 8=59.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:36:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 issued rwts: total=7074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.749 filename1: (groupid=0, jobs=1): err= 0: pid=1533422: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=700, BW=2800KiB/s (2868kB/s)(27.4MiB/10016msec) 00:36:11.749 slat (nsec): min=5274, max=95832, avg=18637.12, stdev=14161.50 00:36:11.749 clat (usec): min=11213, max=40414, avg=22697.64, stdev=3034.01 00:36:11.749 lat (usec): min=11219, max=40441, avg=22716.28, stdev=3036.19 00:36:11.749 clat percentiles (usec): 00:36:11.749 | 1.00th=[13698], 5.00th=[16188], 10.00th=[17695], 20.00th=[22414], 00:36:11.749 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.749 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:36:11.749 | 99.00th=[32375], 99.50th=[35390], 99.90th=[39060], 99.95th=[40633], 00:36:11.749 | 99.99th=[40633] 00:36:11.749 bw ( KiB/s): min= 2688, max= 3056, per=4.20%, avg=2798.40, stdev=124.25, samples=20 00:36:11.749 iops : min= 672, max= 764, avg=699.60, stdev=31.06, samples=20 00:36:11.749 lat (msec) : 20=12.96%, 50=87.04% 00:36:11.749 cpu : usr=98.80%, sys=0.88%, ctx=15, majf=0, minf=9 00:36:11.749 IO depths : 1=4.5%, 2=9.0%, 4=19.7%, 8=58.5%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 complete : 0=0.0%, 4=92.7%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 issued rwts: total=7012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.749 filename1: (groupid=0, jobs=1): err= 0: pid=1533423: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=694, BW=2778KiB/s (2845kB/s)(27.1MiB/10002msec) 00:36:11.749 slat (usec): min=5, max=121, avg=18.30, stdev=16.40 00:36:11.749 clat (usec): min=5523, max=45220, avg=22955.48, stdev=3803.16 00:36:11.749 lat (usec): min=5532, max=45248, avg=22973.78, stdev=3804.21 00:36:11.749 clat percentiles (usec): 00:36:11.749 | 1.00th=[13698], 5.00th=[15664], 10.00th=[17695], 20.00th=[21627], 00:36:11.749 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 
60.00th=[23462], 00:36:11.749 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25560], 95.00th=[29230], 00:36:11.749 | 99.00th=[35914], 99.50th=[36963], 99.90th=[45351], 99.95th=[45351], 00:36:11.749 | 99.99th=[45351] 00:36:11.749 bw ( KiB/s): min= 2608, max= 3104, per=4.16%, avg=2774.16, stdev=133.69, samples=19 00:36:11.749 iops : min= 652, max= 776, avg=693.53, stdev=33.44, samples=19 00:36:11.749 lat (msec) : 10=0.20%, 20=15.43%, 50=84.37% 00:36:11.749 cpu : usr=99.01%, sys=0.67%, ctx=15, majf=0, minf=9 00:36:11.749 IO depths : 1=0.1%, 2=0.7%, 4=4.8%, 8=78.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:36:11.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.749 issued rwts: total=6946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.749 filename1: (groupid=0, jobs=1): err= 0: pid=1533424: Tue Oct 8 18:52:03 2024 00:36:11.749 read: IOPS=698, BW=2794KiB/s (2861kB/s)(27.3MiB/10006msec) 00:36:11.749 slat (usec): min=5, max=116, avg=19.30, stdev=16.78 00:36:11.749 clat (usec): min=6598, max=41743, avg=22738.70, stdev=3089.34 00:36:11.749 lat (usec): min=6612, max=41757, avg=22757.99, stdev=3091.05 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[13304], 5.00th=[15664], 10.00th=[18744], 20.00th=[22414], 00:36:11.750 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.750 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:11.750 | 99.00th=[34341], 99.50th=[38536], 99.90th=[40109], 99.95th=[41681], 00:36:11.750 | 99.99th=[41681] 00:36:11.750 bw ( KiB/s): min= 2688, max= 3072, per=4.19%, avg=2794.11, stdev=133.25, samples=19 00:36:11.750 iops : min= 672, max= 768, avg=698.53, stdev=33.31, samples=19 00:36:11.750 lat (msec) : 10=0.23%, 20=10.65%, 50=89.12% 00:36:11.750 cpu : usr=98.77%, sys=0.85%, ctx=54, majf=0, minf=9 00:36:11.750 IO depths : 1=5.3%, 2=10.6%, 4=22.2%, 8=54.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:11.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=6988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename1: (groupid=0, jobs=1): err= 0: pid=1533425: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=690, BW=2762KiB/s (2828kB/s)(27.0MiB/10010msec) 00:36:11.750 slat (usec): min=5, max=106, avg=15.69, stdev=14.01 00:36:11.750 clat (usec): min=9638, max=39525, avg=23071.54, stdev=3233.95 00:36:11.750 lat (usec): min=9646, max=39531, avg=23087.23, stdev=3234.78 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[13566], 5.00th=[16909], 10.00th=[19006], 20.00th=[22414], 00:36:11.750 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:11.750 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25297], 95.00th=[28181], 00:36:11.750 | 99.00th=[33424], 99.50th=[36439], 99.90th=[38536], 99.95th=[39584], 00:36:11.750 | 99.99th=[39584] 00:36:11.750 bw ( KiB/s): min= 2661, max= 2880, per=4.14%, avg=2762.37, stdev=63.60, samples=19 00:36:11.750 iops : min= 665, max= 720, avg=690.58, stdev=15.92, samples=19 00:36:11.750 lat (msec) : 10=0.06%, 20=12.79%, 50=87.15% 00:36:11.750 cpu : usr=98.84%, sys=0.84%, ctx=14, majf=0, minf=9 00:36:11.750 IO depths : 1=0.8%, 2=2.7%, 4=9.7%, 8=72.8%, 16=13.9%, 32=0.0%, >=64=0.0% 00:36:11.750 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=90.7%, 8=5.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=6912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename1: (groupid=0, jobs=1): err= 0: pid=1533426: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=691, BW=2768KiB/s (2834kB/s)(27.0MiB/10004msec) 00:36:11.750 slat (usec): min=5, max=135, avg=20.25, stdev=17.34 00:36:11.750 clat (usec): min=3815, max=46342, avg=22981.76, stdev=3344.28 00:36:11.750 lat (usec): min=3823, max=46360, avg=23002.01, stdev=3345.70 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[13960], 5.00th=[16188], 10.00th=[18744], 20.00th=[22414], 00:36:11.750 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.750 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[27919], 00:36:11.750 | 99.00th=[31851], 99.50th=[35390], 99.90th=[46400], 99.95th=[46400], 00:36:11.750 | 99.99th=[46400] 00:36:11.750 bw ( KiB/s): min= 2560, max= 2976, per=4.14%, avg=2762.11, stdev=99.52, samples=19 00:36:11.750 iops : min= 640, max= 744, avg=690.53, stdev=24.88, samples=19 00:36:11.750 lat (msec) : 4=0.09%, 10=0.20%, 20=13.15%, 50=86.56% 00:36:11.750 cpu : usr=98.79%, sys=0.83%, ctx=40, majf=0, minf=9 00:36:11.750 IO depths : 1=2.6%, 2=5.3%, 4=12.2%, 8=68.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:11.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=91.0%, 8=5.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=6922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename1: (groupid=0, jobs=1): err= 0: pid=1533427: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=692, BW=2771KiB/s (2837kB/s)(27.1MiB/10016msec) 00:36:11.750 slat (usec): min=5, max=119, avg=17.45, stdev=14.42 00:36:11.750 clat (usec): min=11274, max=45197, avg=22952.56, stdev=3184.32 00:36:11.750 lat (usec): min=11281, max=45203, avg=22970.02, stdev=3186.15 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[14091], 5.00th=[16319], 10.00th=[19006], 20.00th=[22414], 00:36:11.750 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.750 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[26346], 00:36:11.750 | 99.00th=[33162], 99.50th=[37487], 99.90th=[39060], 99.95th=[45351], 00:36:11.750 | 99.99th=[45351] 00:36:11.750 bw ( KiB/s): min= 2688, max= 3136, per=4.15%, avg=2768.80, stdev=121.24, samples=20 00:36:11.750 iops : min= 672, max= 784, avg=692.20, stdev=30.31, samples=20 00:36:11.750 lat (msec) : 20=12.51%, 50=87.49% 00:36:11.750 cpu : usr=99.08%, sys=0.60%, ctx=19, majf=0, minf=9 00:36:11.750 IO depths : 1=4.6%, 2=9.4%, 4=20.2%, 8=57.7%, 16=8.1%, 32=0.0%, >=64=0.0% 00:36:11.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=6938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename2: (groupid=0, jobs=1): err= 0: pid=1533428: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=728, BW=2915KiB/s (2985kB/s)(28.5MiB/10010msec) 00:36:11.750 slat (usec): min=5, max=133, avg=17.20, stdev=15.20 00:36:11.750 clat (usec): min=9337, max=38514, avg=21811.95, stdev=4107.29 
00:36:11.750 lat (usec): min=9347, max=38531, avg=21829.15, stdev=4110.66 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[12911], 5.00th=[14746], 10.00th=[15533], 20.00th=[17433], 00:36:11.750 | 30.00th=[21365], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:11.750 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24511], 95.00th=[27657], 00:36:11.750 | 99.00th=[34341], 99.50th=[35390], 99.90th=[38011], 99.95th=[38536], 00:36:11.750 | 99.99th=[38536] 00:36:11.750 bw ( KiB/s): min= 2480, max= 3328, per=4.38%, avg=2922.95, stdev=253.38, samples=19 00:36:11.750 iops : min= 620, max= 832, avg=730.74, stdev=63.34, samples=19 00:36:11.750 lat (msec) : 10=0.08%, 20=26.80%, 50=73.11% 00:36:11.750 cpu : usr=98.88%, sys=0.80%, ctx=16, majf=0, minf=9 00:36:11.750 IO depths : 1=3.4%, 2=6.7%, 4=16.0%, 8=64.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:36:11.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=7294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename2: (groupid=0, jobs=1): err= 0: pid=1533429: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=685, BW=2743KiB/s (2809kB/s)(26.8MiB/10013msec) 00:36:11.750 slat (usec): min=5, max=103, avg=17.18, stdev=14.57 00:36:11.750 clat (usec): min=6822, max=38427, avg=23194.01, stdev=2764.36 00:36:11.750 lat (usec): min=6846, max=38433, avg=23211.19, stdev=2765.42 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[14091], 5.00th=[16909], 10.00th=[21890], 20.00th=[22676], 00:36:11.750 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:36:11.750 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[25560], 00:36:11.750 | 99.00th=[33817], 99.50th=[34341], 99.90th=[37487], 99.95th=[37487], 00:36:11.750 | 99.99th=[38536] 00:36:11.750 bw ( KiB/s): min= 2560, max= 3136, per=4.11%, avg=2740.00, stdev=131.72, samples=20 00:36:11.750 iops : min= 640, max= 784, avg=685.00, stdev=32.93, samples=20 00:36:11.750 lat (msec) : 10=0.23%, 20=7.70%, 50=92.06% 00:36:11.750 cpu : usr=98.96%, sys=0.72%, ctx=16, majf=0, minf=9 00:36:11.750 IO depths : 1=5.3%, 2=10.7%, 4=22.2%, 8=54.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:11.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=6866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename2: (groupid=0, jobs=1): err= 0: pid=1533430: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=697, BW=2788KiB/s (2855kB/s)(27.2MiB/10007msec) 00:36:11.750 slat (usec): min=5, max=105, avg=15.09, stdev=12.74 00:36:11.750 clat (usec): min=7785, max=42624, avg=22848.71, stdev=3722.19 00:36:11.750 lat (usec): min=7797, max=42641, avg=22863.80, stdev=3723.77 00:36:11.750 clat percentiles (usec): 00:36:11.750 | 1.00th=[13435], 5.00th=[15533], 10.00th=[17171], 20.00th=[20841], 00:36:11.750 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.750 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25822], 95.00th=[28967], 00:36:11.750 | 99.00th=[33817], 99.50th=[36963], 99.90th=[42730], 99.95th=[42730], 00:36:11.750 | 99.99th=[42730] 00:36:11.750 bw ( KiB/s): min= 2560, max= 2976, per=4.18%, avg=2784.84, stdev=108.84, samples=19 00:36:11.750 iops : min= 640, max= 
744, avg=696.21, stdev=27.21, samples=19 00:36:11.750 lat (msec) : 10=0.14%, 20=17.35%, 50=82.51% 00:36:11.750 cpu : usr=99.01%, sys=0.68%, ctx=14, majf=0, minf=11 00:36:11.750 IO depths : 1=2.1%, 2=4.2%, 4=10.6%, 8=70.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:36:11.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 complete : 0=0.0%, 4=90.5%, 8=5.6%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.750 issued rwts: total=6976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.750 filename2: (groupid=0, jobs=1): err= 0: pid=1533431: Tue Oct 8 18:52:03 2024 00:36:11.750 read: IOPS=695, BW=2781KiB/s (2848kB/s)(27.2MiB/10017msec) 00:36:11.751 slat (usec): min=5, max=112, avg=13.00, stdev=10.71 00:36:11.751 clat (usec): min=6623, max=38534, avg=22911.73, stdev=2754.66 00:36:11.751 lat (usec): min=6642, max=38542, avg=22924.73, stdev=2755.10 00:36:11.751 clat percentiles (usec): 00:36:11.751 | 1.00th=[13960], 5.00th=[16712], 10.00th=[20579], 20.00th=[22676], 00:36:11.751 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:11.751 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:36:11.751 | 99.00th=[29492], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:36:11.751 | 99.99th=[38536] 00:36:11.751 bw ( KiB/s): min= 2560, max= 2992, per=4.17%, avg=2779.20, stdev=113.98, samples=20 00:36:11.751 iops : min= 640, max= 748, avg=694.80, stdev=28.49, samples=20 00:36:11.751 lat (msec) : 10=0.23%, 20=8.87%, 50=90.90% 00:36:11.751 cpu : usr=98.95%, sys=0.72%, ctx=15, majf=0, minf=9 00:36:11.751 IO depths : 1=5.4%, 2=10.9%, 4=22.7%, 8=53.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:11.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 issued rwts: total=6964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.751 filename2: (groupid=0, jobs=1): err= 0: pid=1533432: Tue Oct 8 18:52:03 2024 00:36:11.751 read: IOPS=697, BW=2790KiB/s (2857kB/s)(27.3MiB/10007msec) 00:36:11.751 slat (usec): min=5, max=115, avg=16.18, stdev=12.43 00:36:11.751 clat (usec): min=11056, max=39205, avg=22819.00, stdev=3963.57 00:36:11.751 lat (usec): min=11062, max=39228, avg=22835.18, stdev=3965.85 00:36:11.751 clat percentiles (usec): 00:36:11.751 | 1.00th=[13960], 5.00th=[15139], 10.00th=[16712], 20.00th=[20579], 00:36:11.751 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:11.751 | 70.00th=[23725], 80.00th=[24249], 90.00th=[26346], 95.00th=[30278], 00:36:11.751 | 99.00th=[34341], 99.50th=[35390], 99.90th=[38536], 99.95th=[39060], 00:36:11.751 | 99.99th=[39060] 00:36:11.751 bw ( KiB/s): min= 2560, max= 3072, per=4.17%, avg=2783.16, stdev=152.39, samples=19 00:36:11.751 iops : min= 640, max= 768, avg=695.79, stdev=38.10, samples=19 00:36:11.751 lat (msec) : 20=18.95%, 50=81.05% 00:36:11.751 cpu : usr=98.79%, sys=0.90%, ctx=14, majf=0, minf=9 00:36:11.751 IO depths : 1=2.9%, 2=5.8%, 4=14.4%, 8=66.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:36:11.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 issued rwts: total=6980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.751 filename2: (groupid=0, jobs=1): 
err= 0: pid=1533433: Tue Oct 8 18:52:03 2024 00:36:11.751 read: IOPS=680, BW=2724KiB/s (2789kB/s)(26.6MiB/10016msec) 00:36:11.751 slat (usec): min=5, max=130, avg=21.43, stdev=17.72 00:36:11.751 clat (usec): min=13757, max=36750, avg=23304.73, stdev=1702.31 00:36:11.751 lat (usec): min=13766, max=36779, avg=23326.16, stdev=1702.69 00:36:11.751 clat percentiles (usec): 00:36:11.751 | 1.00th=[16057], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:11.751 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:11.751 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:11.751 | 99.00th=[28967], 99.50th=[30278], 99.90th=[35914], 99.95th=[36439], 00:36:11.751 | 99.99th=[36963] 00:36:11.751 bw ( KiB/s): min= 2688, max= 2864, per=4.08%, avg=2721.30, stdev=60.62, samples=20 00:36:11.751 iops : min= 672, max= 716, avg=680.30, stdev=15.11, samples=20 00:36:11.751 lat (msec) : 20=3.65%, 50=96.35% 00:36:11.751 cpu : usr=98.84%, sys=0.85%, ctx=13, majf=0, minf=9 00:36:11.751 IO depths : 1=5.6%, 2=11.6%, 4=23.6%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:11.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 issued rwts: total=6820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.751 filename2: (groupid=0, jobs=1): err= 0: pid=1533434: Tue Oct 8 18:52:03 2024 00:36:11.751 read: IOPS=678, BW=2716KiB/s (2781kB/s)(26.5MiB/10003msec) 00:36:11.751 slat (usec): min=5, max=111, avg=20.38, stdev=15.15 00:36:11.751 clat (usec): min=2969, max=45676, avg=23397.46, stdev=2810.43 00:36:11.751 lat (usec): min=2978, max=45697, avg=23417.84, stdev=2811.22 00:36:11.751 clat percentiles (usec): 00:36:11.751 | 1.00th=[14877], 5.00th=[19530], 10.00th=[22414], 20.00th=[22676], 00:36:11.751 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:36:11.751 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[26084], 00:36:11.751 | 99.00th=[32113], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:36:11.751 | 99.99th=[45876] 00:36:11.751 bw ( KiB/s): min= 2560, max= 2912, per=4.06%, avg=2704.68, stdev=90.56, samples=19 00:36:11.751 iops : min= 640, max= 728, avg=676.16, stdev=22.66, samples=19 00:36:11.751 lat (msec) : 4=0.24%, 10=0.13%, 20=5.05%, 50=94.58% 00:36:11.751 cpu : usr=99.04%, sys=0.65%, ctx=17, majf=0, minf=9 00:36:11.751 IO depths : 1=4.3%, 2=8.9%, 4=20.0%, 8=57.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:11.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 complete : 0=0.0%, 4=92.6%, 8=2.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 issued rwts: total=6791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.751 filename2: (groupid=0, jobs=1): err= 0: pid=1533435: Tue Oct 8 18:52:03 2024 00:36:11.751 read: IOPS=684, BW=2739KiB/s (2805kB/s)(26.8MiB/10010msec) 00:36:11.751 slat (usec): min=5, max=114, avg=17.17, stdev=12.27 00:36:11.751 clat (usec): min=9690, max=39073, avg=23220.14, stdev=2518.67 00:36:11.751 lat (usec): min=9696, max=39088, avg=23237.31, stdev=2520.01 00:36:11.751 clat percentiles (usec): 00:36:11.751 | 1.00th=[14615], 5.00th=[18220], 10.00th=[22152], 20.00th=[22676], 00:36:11.751 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:11.751 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 
95.00th=[25560], 00:36:11.751 | 99.00th=[31851], 99.50th=[34866], 99.90th=[38011], 99.95th=[39060], 00:36:11.751 | 99.99th=[39060] 00:36:11.751 bw ( KiB/s): min= 2608, max= 2944, per=4.10%, avg=2733.74, stdev=82.30, samples=19 00:36:11.751 iops : min= 652, max= 736, avg=683.42, stdev=20.58, samples=19 00:36:11.751 lat (msec) : 10=0.23%, 20=6.33%, 50=93.43% 00:36:11.751 cpu : usr=98.73%, sys=0.96%, ctx=14, majf=0, minf=9 00:36:11.751 IO depths : 1=4.5%, 2=9.5%, 4=21.5%, 8=56.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:11.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.751 issued rwts: total=6854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:11.751 00:36:11.751 Run status group 0 (all jobs): 00:36:11.751 READ: bw=65.1MiB/s (68.3MB/s), 2713KiB/s-2967KiB/s (2778kB/s-3039kB/s), io=652MiB (684MB), run=10002-10018msec 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.751 18:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.751 
18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.751 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 bdev_null0 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 [2024-10-08 18:52:04.092670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 bdev_null1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:11.752 { 00:36:11.752 "params": { 00:36:11.752 "name": "Nvme$subsystem", 00:36:11.752 "trtype": "$TEST_TRANSPORT", 00:36:11.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.752 "adrfam": "ipv4", 00:36:11.752 "trsvcid": "$NVMF_PORT", 00:36:11.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.752 "hdgst": ${hdgst:-false}, 00:36:11.752 "ddgst": ${ddgst:-false} 00:36:11.752 }, 00:36:11.752 "method": "bdev_nvme_attach_controller" 00:36:11.752 } 00:36:11.752 EOF 00:36:11.752 )") 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:11.752 { 00:36:11.752 "params": { 00:36:11.752 "name": "Nvme$subsystem", 00:36:11.752 "trtype": "$TEST_TRANSPORT", 00:36:11.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.752 "adrfam": "ipv4", 00:36:11.752 "trsvcid": "$NVMF_PORT", 00:36:11.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.752 "hdgst": ${hdgst:-false}, 00:36:11.752 "ddgst": ${ddgst:-false} 00:36:11.752 }, 00:36:11.752 "method": "bdev_nvme_attach_controller" 00:36:11.752 } 00:36:11.752 EOF 00:36:11.752 )") 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:11.752 
18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:11.752 18:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:11.752 "params": { 00:36:11.752 "name": "Nvme0", 00:36:11.752 "trtype": "tcp", 00:36:11.752 "traddr": "10.0.0.2", 00:36:11.752 "adrfam": "ipv4", 00:36:11.752 "trsvcid": "4420", 00:36:11.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.753 "hdgst": false, 00:36:11.753 "ddgst": false 00:36:11.753 }, 00:36:11.753 "method": "bdev_nvme_attach_controller" 00:36:11.753 },{ 00:36:11.753 "params": { 00:36:11.753 "name": "Nvme1", 00:36:11.753 "trtype": "tcp", 00:36:11.753 "traddr": "10.0.0.2", 00:36:11.753 "adrfam": "ipv4", 00:36:11.753 "trsvcid": "4420", 00:36:11.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.753 "hdgst": false, 00:36:11.753 "ddgst": false 00:36:11.753 }, 00:36:11.753 "method": "bdev_nvme_attach_controller" 00:36:11.753 }' 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:11.753 18:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.753 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:11.753 ... 00:36:11.753 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:11.753 ... 
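Stripped of the xtrace noise, the run that starts below is fio driven through SPDK's bdev fio plugin entirely over file descriptors: the bdev JSON configuration arrives on /dev/fd/62 and the generated job file on /dev/fd/61, so no temporary files touch disk. Reduced to its essentials (paths taken verbatim from the trace above):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The JSON printed above attaches nqn.2016-06.io.spdk:cnode0 and cnode1 over TCP as bdevs Nvme0 and Nvme1, which the job file then addresses as filename0 and filename1.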
00:36:11.753 fio-3.35 00:36:11.753 Starting 4 threads 00:36:17.142 00:36:17.142 filename0: (groupid=0, jobs=1): err= 0: pid=1535774: Tue Oct 8 18:52:10 2024 00:36:17.142 read: IOPS=2952, BW=23.1MiB/s (24.2MB/s)(115MiB/5001msec) 00:36:17.142 slat (usec): min=5, max=107, avg= 6.78, stdev= 3.02 00:36:17.142 clat (usec): min=1390, max=45436, avg=2692.67, stdev=1021.43 00:36:17.142 lat (usec): min=1396, max=45470, avg=2699.46, stdev=1021.63 00:36:17.142 clat percentiles (usec): 00:36:17.142 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2638], 00:36:17.142 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:17.142 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2933], 00:36:17.142 | 99.00th=[ 3752], 99.50th=[ 4015], 99.90th=[ 4621], 99.95th=[45351], 00:36:17.142 | 99.99th=[45351] 00:36:17.142 bw ( KiB/s): min=21680, max=24368, per=24.92%, avg=23624.89, stdev=759.84, samples=9 00:36:17.142 iops : min= 2710, max= 3046, avg=2953.11, stdev=94.98, samples=9 00:36:17.142 lat (msec) : 2=0.94%, 4=98.48%, 10=0.52%, 50=0.05% 00:36:17.142 cpu : usr=96.02%, sys=3.72%, ctx=10, majf=0, minf=111 00:36:17.142 IO depths : 1=0.1%, 2=0.2%, 4=69.1%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.142 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.142 issued rwts: total=14765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.142 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.142 filename0: (groupid=0, jobs=1): err= 0: pid=1535775: Tue Oct 8 18:52:10 2024 00:36:17.142 read: IOPS=2959, BW=23.1MiB/s (24.2MB/s)(116MiB/5002msec) 00:36:17.142 slat (nsec): min=5459, max=36891, avg=6344.61, stdev=2036.03 00:36:17.142 clat (usec): min=1077, max=43020, avg=2686.36, stdev=958.94 00:36:17.142 lat (usec): min=1082, max=43052, avg=2692.71, stdev=959.12 00:36:17.142 clat percentiles (usec): 00:36:17.142 | 1.00th=[ 1991], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638], 00:36:17.142 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:17.142 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:36:17.142 | 99.00th=[ 3425], 99.50th=[ 3720], 99.90th=[ 4178], 99.95th=[42730], 00:36:17.142 | 99.99th=[43254] 00:36:17.142 bw ( KiB/s): min=21968, max=24016, per=24.95%, avg=23655.11, stdev=643.89, samples=9 00:36:17.143 iops : min= 2746, max= 3002, avg=2956.89, stdev=80.49, samples=9 00:36:17.143 lat (msec) : 2=1.07%, 4=98.72%, 10=0.16%, 50=0.05% 00:36:17.143 cpu : usr=96.20%, sys=3.54%, ctx=8, majf=0, minf=71 00:36:17.143 IO depths : 1=0.1%, 2=0.2%, 4=69.9%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.143 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.143 issued rwts: total=14803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.143 filename1: (groupid=0, jobs=1): err= 0: pid=1535776: Tue Oct 8 18:52:10 2024 00:36:17.143 read: IOPS=2972, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:17.143 slat (usec): min=5, max=124, avg= 7.79, stdev= 3.08 00:36:17.143 clat (usec): min=1141, max=4881, avg=2669.96, stdev=248.87 00:36:17.143 lat (usec): min=1151, max=4898, avg=2677.75, stdev=248.64 00:36:17.143 clat percentiles (usec): 00:36:17.143 | 1.00th=[ 1778], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2606], 00:36:17.143 | 30.00th=[ 
2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:17.143 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2835], 95.00th=[ 2966], 00:36:17.143 | 99.00th=[ 3720], 99.50th=[ 3982], 99.90th=[ 4424], 99.95th=[ 4490], 00:36:17.143 | 99.99th=[ 4883] 00:36:17.143 bw ( KiB/s): min=23520, max=24544, per=25.09%, avg=23781.33, stdev=301.99, samples=9 00:36:17.143 iops : min= 2940, max= 3068, avg=2972.67, stdev=37.75, samples=9 00:36:17.143 lat (msec) : 2=1.84%, 4=97.67%, 10=0.48% 00:36:17.143 cpu : usr=96.24%, sys=3.48%, ctx=8, majf=0, minf=90 00:36:17.143 IO depths : 1=0.1%, 2=0.3%, 4=73.1%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.143 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.143 issued rwts: total=14865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.143 filename1: (groupid=0, jobs=1): err= 0: pid=1535778: Tue Oct 8 18:52:10 2024 00:36:17.143 read: IOPS=2966, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:17.143 slat (usec): min=5, max=218, avg= 7.91, stdev= 3.56 00:36:17.143 clat (usec): min=779, max=4974, avg=2675.51, stdev=226.68 00:36:17.143 lat (usec): min=790, max=4980, avg=2683.42, stdev=226.37 00:36:17.143 clat percentiles (usec): 00:36:17.143 | 1.00th=[ 1942], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:36:17.143 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:17.143 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2933], 00:36:17.143 | 99.00th=[ 3589], 99.50th=[ 3884], 99.90th=[ 4424], 99.95th=[ 4490], 00:36:17.143 | 99.99th=[ 4948] 00:36:17.143 bw ( KiB/s): min=23488, max=24304, per=25.03%, avg=23731.56, stdev=261.05, samples=9 00:36:17.143 iops : min= 2936, max= 3038, avg=2966.44, stdev=32.63, samples=9 00:36:17.143 lat (usec) : 1000=0.03% 00:36:17.143 lat (msec) : 2=1.21%, 4=98.42%, 10=0.34% 00:36:17.143 cpu : usr=96.42%, sys=3.32%, ctx=8, majf=0, minf=102 00:36:17.143 IO depths : 1=0.1%, 2=0.1%, 4=71.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.143 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.143 issued rwts: total=14836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.143 00:36:17.143 Run status group 0 (all jobs): 00:36:17.143 READ: bw=92.6MiB/s (97.1MB/s), 23.1MiB/s-23.2MiB/s (24.2MB/s-24.3MB/s), io=463MiB (486MB), run=5001-5002msec 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 00:36:17.143 real 0m24.234s 00:36:17.143 user 5m13.254s 00:36:17.143 sys 0m4.534s 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 ************************************ 00:36:17.143 END TEST fio_dif_rand_params 00:36:17.143 ************************************ 00:36:17.143 18:52:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:17.143 18:52:10 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:17.143 18:52:10 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 ************************************ 00:36:17.143 START TEST fio_dif_digest 00:36:17.143 ************************************ 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 bdev_null0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 [2024-10-08 18:52:10.582101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:17.143 { 00:36:17.143 "params": { 00:36:17.143 "name": "Nvme$subsystem", 00:36:17.143 "trtype": "$TEST_TRANSPORT", 00:36:17.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.143 "adrfam": "ipv4", 00:36:17.143 "trsvcid": "$NVMF_PORT", 00:36:17.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.143 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:36:17.143 "hdgst": ${hdgst:-false}, 00:36:17.143 "ddgst": ${ddgst:-false} 00:36:17.143 }, 00:36:17.143 "method": "bdev_nvme_attach_controller" 00:36:17.143 } 00:36:17.143 EOF 00:36:17.143 )") 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.143 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:17.144 "params": { 00:36:17.144 "name": "Nvme0", 00:36:17.144 "trtype": "tcp", 00:36:17.144 "traddr": "10.0.0.2", 00:36:17.144 "adrfam": "ipv4", 00:36:17.144 "trsvcid": "4420", 00:36:17.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.144 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.144 "hdgst": true, 00:36:17.144 "ddgst": true 00:36:17.144 }, 00:36:17.144 "method": "bdev_nvme_attach_controller" 00:36:17.144 }' 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.144 18:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.144 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:17.144 ... 
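With "hdgst": true and "ddgst": true in the attach parameters above, every NVMe/TCP PDU on this connection carries a CRC32C header and data digest, which is what this fio_dif_digest pass exercises on top of the DIF type 3 null bdev created earlier (--md-size 16 --dif-type 3). For comparison, a kernel-initiator connect with the same digests enabled would look roughly like this (nvme-cli sketch; flag spellings may vary by version, see nvme-connect(1)):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest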
00:36:17.144 fio-3.35 00:36:17.144 Starting 3 threads 00:36:29.378 00:36:29.379 filename0: (groupid=0, jobs=1): err= 0: pid=1537130: Tue Oct 8 18:52:21 2024 00:36:29.379 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10027msec) 00:36:29.379 slat (nsec): min=5679, max=37312, avg=6550.24, stdev=1203.41 00:36:29.379 clat (msec): min=6, max=132, avg=13.98, stdev=13.48 00:36:29.379 lat (msec): min=6, max=132, avg=13.99, stdev=13.48 00:36:29.379 clat percentiles (msec): 00:36:29.379 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:36:29.379 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:36:29.379 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 48], 95.00th=[ 51], 00:36:29.379 | 99.00th=[ 53], 99.50th=[ 90], 99.90th=[ 93], 99.95th=[ 94], 00:36:29.379 | 99.99th=[ 133] 00:36:29.379 bw ( KiB/s): min=19968, max=35840, per=24.58%, avg=27481.60, stdev=4325.59, samples=20 00:36:29.379 iops : min= 156, max= 280, avg=214.70, stdev=33.79, samples=20 00:36:29.379 lat (msec) : 10=61.86%, 20=28.00%, 50=3.95%, 100=6.14%, 250=0.05% 00:36:29.379 cpu : usr=94.28%, sys=5.46%, ctx=16, majf=0, minf=110 00:36:29.379 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.379 issued rwts: total=2150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.379 filename0: (groupid=0, jobs=1): err= 0: pid=1537131: Tue Oct 8 18:52:21 2024 00:36:29.379 read: IOPS=337, BW=42.2MiB/s (44.2MB/s)(424MiB/10046msec) 00:36:29.379 slat (nsec): min=5848, max=40817, avg=6586.43, stdev=1139.42 00:36:29.379 clat (usec): min=4913, max=47787, avg=8871.75, stdev=1683.82 00:36:29.379 lat (usec): min=4919, max=47793, avg=8878.33, stdev=1683.88 00:36:29.379 clat percentiles (usec): 00:36:29.379 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7439], 00:36:29.379 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:36:29.379 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[10945], 00:36:29.379 | 99.00th=[11600], 99.50th=[11863], 99.90th=[12518], 99.95th=[46400], 00:36:29.379 | 99.99th=[47973] 00:36:29.379 bw ( KiB/s): min=40448, max=46592, per=38.77%, avg=43353.60, stdev=1676.19, samples=20 00:36:29.379 iops : min= 316, max= 364, avg=338.70, stdev=13.10, samples=20 00:36:29.379 lat (msec) : 10=76.22%, 20=23.72%, 50=0.06% 00:36:29.379 cpu : usr=92.90%, sys=6.84%, ctx=27, majf=0, minf=244 00:36:29.379 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.379 issued rwts: total=3389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.379 filename0: (groupid=0, jobs=1): err= 0: pid=1537132: Tue Oct 8 18:52:21 2024 00:36:29.379 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(405MiB/10005msec) 00:36:29.379 slat (nsec): min=5807, max=31855, avg=6746.37, stdev=1120.65 00:36:29.379 clat (usec): min=5173, max=53229, avg=9264.24, stdev=2973.71 00:36:29.379 lat (usec): min=5179, max=53235, avg=9270.99, stdev=2973.72 00:36:29.379 clat percentiles (usec): 00:36:29.379 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7635], 00:36:29.379 | 30.00th=[ 8094], 40.00th=[ 8717], 
50.00th=[ 9241], 60.00th=[ 9765], 00:36:29.379 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:36:29.379 | 99.00th=[12125], 99.50th=[12911], 99.90th=[52691], 99.95th=[52691], 00:36:29.379 | 99.99th=[53216] 00:36:29.379 bw ( KiB/s): min=37632, max=44800, per=37.09%, avg=41472.00, stdev=1877.33, samples=19 00:36:29.379 iops : min= 294, max= 350, avg=324.00, stdev=14.67, samples=19 00:36:29.379 lat (msec) : 10=67.56%, 20=32.07%, 50=0.06%, 100=0.31% 00:36:29.379 cpu : usr=93.54%, sys=6.21%, ctx=23, majf=0, minf=107 00:36:29.379 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.379 issued rwts: total=3237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.379 00:36:29.379 Run status group 0 (all jobs): 00:36:29.379 READ: bw=109MiB/s (115MB/s), 26.8MiB/s-42.2MiB/s (28.1MB/s-44.2MB/s), io=1097MiB (1150MB), run=10005-10046msec 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.379 00:36:29.379 real 0m11.243s 00:36:29.379 user 0m42.675s 00:36:29.379 sys 0m2.200s 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:29.379 18:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.379 ************************************ 00:36:29.379 END TEST fio_dif_digest 00:36:29.379 ************************************ 00:36:29.379 18:52:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:29.379 18:52:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:29.379 rmmod nvme_tcp 00:36:29.379 rmmod nvme_fabrics 00:36:29.379 rmmod nvme_keyring 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1526986 ']' 00:36:29.379 18:52:21 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1526986 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1526986 ']' 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1526986 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526986 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526986' 00:36:29.379 killing process with pid 1526986 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1526986 00:36:29.379 18:52:21 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1526986 00:36:29.379 18:52:22 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:29.379 18:52:22 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:31.924 Waiting for block devices as requested 00:36:31.924 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:31.924 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:31.924 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:31.924 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:31.924 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:31.924 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:31.924 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:32.184 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:32.184 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:32.445 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:32.445 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:32.445 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:32.705 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:32.705 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:32.705 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:32.965 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:32.965 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:33.225 18:52:27 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.225 18:52:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.225 18:52:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.769 18:52:29 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:35.769 00:36:35.769 real 1m18.060s 00:36:35.769 
user 7m54.809s 00:36:35.769 sys 0m22.391s 00:36:35.769 18:52:29 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:35.769 18:52:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.769 ************************************ 00:36:35.769 END TEST nvmf_dif 00:36:35.769 ************************************ 00:36:35.769 18:52:29 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:35.769 18:52:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:35.769 18:52:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:35.769 18:52:29 -- common/autotest_common.sh@10 -- # set +x 00:36:35.769 ************************************ 00:36:35.769 START TEST nvmf_abort_qd_sizes 00:36:35.769 ************************************ 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:35.769 * Looking for test storage... 00:36:35.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.769 --rc genhtml_branch_coverage=1 00:36:35.769 --rc genhtml_function_coverage=1 00:36:35.769 --rc genhtml_legend=1 00:36:35.769 --rc geninfo_all_blocks=1 00:36:35.769 --rc geninfo_unexecuted_blocks=1 00:36:35.769 00:36:35.769 ' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.769 --rc genhtml_branch_coverage=1 00:36:35.769 --rc genhtml_function_coverage=1 00:36:35.769 --rc genhtml_legend=1 00:36:35.769 --rc geninfo_all_blocks=1 00:36:35.769 --rc geninfo_unexecuted_blocks=1 00:36:35.769 00:36:35.769 ' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.769 --rc genhtml_branch_coverage=1 00:36:35.769 --rc genhtml_function_coverage=1 00:36:35.769 --rc genhtml_legend=1 00:36:35.769 --rc geninfo_all_blocks=1 00:36:35.769 --rc geninfo_unexecuted_blocks=1 00:36:35.769 00:36:35.769 ' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:35.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.769 --rc genhtml_branch_coverage=1 00:36:35.769 --rc genhtml_function_coverage=1 00:36:35.769 --rc genhtml_legend=1 00:36:35.769 --rc geninfo_all_blocks=1 00:36:35.769 --rc geninfo_unexecuted_blocks=1 00:36:35.769 00:36:35.769 ' 00:36:35.769 18:52:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:35.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.770 18:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:43.907 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:43.908 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:43.908 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:43.908 Found net devices under 0000:31:00.0: cvl_0_0 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:43.908 Found net devices under 0000:31:00.1: cvl_0_1 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:43.908 18:52:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:43.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:43.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:36:43.908 00:36:43.908 --- 10.0.0.2 ping statistics --- 00:36:43.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.908 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:43.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:43.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:36:43.908 00:36:43.908 --- 10.0.0.1 ping statistics --- 00:36:43.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.908 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:36:43.908 18:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:46.450 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:46.450 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:46.450 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:46.450 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:46.450 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:46.711 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1546719 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1546719 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1546719 ']' 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:47.281 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:47.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.282 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:47.282 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.282 [2024-10-08 18:52:41.146375] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:36:47.282 [2024-10-08 18:52:41.146440] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.282 [2024-10-08 18:52:41.237760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.282 [2024-10-08 18:52:41.334425] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.282 [2024-10-08 18:52:41.334488] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.282 [2024-10-08 18:52:41.334497] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.282 [2024-10-08 18:52:41.334504] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.282 [2024-10-08 18:52:41.334510] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.282 [2024-10-08 18:52:41.336772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.282 [2024-10-08 18:52:41.336928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:47.282 [2024-10-08 18:52:41.337096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:47.282 [2024-10-08 18:52:41.337289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.225 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:48.225 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:48.225 18:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:48.225 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:48.225 18:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:48.225 
18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:48.225 18:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.225 ************************************ 00:36:48.225 START TEST spdk_target_abort 00:36:48.225 ************************************ 00:36:48.225 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:48.225 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:48.225 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:48.225 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.225 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.486 spdk_targetn1 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.486 [2024-10-08 18:52:42.371626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.486 [2024-10-08 18:52:42.411916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.486 18:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.486 [2024-10-08 18:52:42.538498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:191 nsid:1 lba:560 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:48.486 [2024-10-08 18:52:42.538524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0049 p:1 m:0 dnr:0 00:36:48.747 [2024-10-08 18:52:42.577567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2248 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:36:48.747 [2024-10-08 18:52:42.577584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:48.747 [2024-10-08 18:52:42.583979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2528 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:48.747 [2024-10-08 18:52:42.583993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:52.048 Initializing NVMe Controllers 00:36:52.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:52.048 Initialization complete. Launching workers. 00:36:52.048 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17368, failed: 3 00:36:52.048 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 4237, failed to submit 13134 00:36:52.048 success 701, unsuccessful 3536, failed 0 00:36:52.048 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.048 18:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.048 [2024-10-08 18:52:45.693151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:624 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:36:52.048 [2024-10-08 18:52:45.693192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:36:52.048 [2024-10-08 18:52:45.701039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:904 len:8 PRP1 0x200007c46000 PRP2 0x0 00:36:52.048 [2024-10-08 18:52:45.701067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:36:52.048 [2024-10-08 18:52:45.717122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1152 len:8 PRP1 0x200007c56000 PRP2 0x0 00:36:52.048 [2024-10-08 18:52:45.717143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009a p:1 m:0 dnr:0 00:36:52.048 [2024-10-08 18:52:45.796716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2992 len:8 PRP1 0x200007c52000 PRP2 0x0 00:36:52.048 [2024-10-08 18:52:45.796739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:53.433 [2024-10-08 18:52:47.341907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:38184 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:36:53.433 [2024-10-08 18:52:47.341945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00a7 p:1 m:0 dnr:0 00:36:53.433 [2024-10-08 18:52:47.478137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:41056 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:36:53.433 [2024-10-08 18:52:47.478163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:36:54.374 [2024-10-08 18:52:48.262838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:58632 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:36:54.374 [2024-10-08 18:52:48.262862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00ab p:1 m:0 dnr:0 00:36:54.946 Initializing NVMe Controllers 00:36:54.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.946 Initialization complete. Launching workers. 00:36:54.946 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8490, failed: 7 00:36:54.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1203, failed to submit 7294 00:36:54.946 success 344, unsuccessful 859, failed 0 00:36:54.946 18:52:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.946 18:52:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.207 [2024-10-08 18:52:49.042616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1984 len:8 PRP1 0x200007902000 PRP2 0x0 00:36:55.207 [2024-10-08 18:52:49.042641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00db p:1 m:0 dnr:0 00:36:58.509 Initializing NVMe Controllers 00:36:58.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.509 Initialization complete. Launching workers. 
00:36:58.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43801, failed: 1 00:36:58.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2736, failed to submit 41066 00:36:58.509 success 587, unsuccessful 2149, failed 0 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.509 18:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1546719 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1546719 ']' 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1546719 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:59.893 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1546719 00:37:00.154 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:00.154 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:00.154 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1546719' 00:37:00.154 killing process with pid 1546719 00:37:00.154 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1546719 00:37:00.154 18:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1546719 00:37:00.154 00:37:00.154 real 0m12.041s 00:37:00.154 user 0m48.982s 00:37:00.154 sys 0m1.949s 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.154 ************************************ 00:37:00.154 END TEST spdk_target_abort 00:37:00.154 ************************************ 00:37:00.154 18:52:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:00.154 18:52:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:00.154 18:52:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:00.154 18:52:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:00.154 ************************************ 00:37:00.154 START TEST kernel_target_abort 00:37:00.154 
************************************ 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:00.154 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:37:00.415 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:00.415 18:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:03.715 Waiting for block devices as requested 00:37:03.715 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:03.976 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:03.976 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:03.976 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:04.236 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:04.236 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:04.236 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:04.236 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:04.496 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:04.496 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:04.756 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:04.756 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:04.756 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:05.016 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:05.016 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:05.016 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:05.277 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:05.538 No valid GPT data, bailing 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:05.538 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:05.539 18:52:59 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:05.539 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:37:05.800 00:37:05.800 Discovery Log Number of Records 2, Generation counter 2 00:37:05.800 =====Discovery Log Entry 0====== 00:37:05.800 trtype: tcp 00:37:05.800 adrfam: ipv4 00:37:05.800 subtype: current discovery subsystem 00:37:05.800 treq: not specified, sq flow control disable supported 00:37:05.800 portid: 1 00:37:05.800 trsvcid: 4420 00:37:05.800 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:05.800 traddr: 10.0.0.1 00:37:05.800 eflags: none 00:37:05.800 sectype: none 00:37:05.800 =====Discovery Log Entry 1====== 00:37:05.800 trtype: tcp 00:37:05.800 adrfam: ipv4 00:37:05.800 subtype: nvme subsystem 00:37:05.800 treq: not specified, sq flow control disable supported 00:37:05.800 portid: 1 00:37:05.800 trsvcid: 4420 00:37:05.800 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:05.800 traddr: 10.0.0.1 00:37:05.800 eflags: none 00:37:05.800 sectype: none 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.800 18:52:59 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:05.800 18:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:09.179 Initializing NVMe Controllers 00:37:09.179 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:09.179 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:09.179 Initialization complete. Launching workers. 00:37:09.179 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67344, failed: 0 00:37:09.179 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67344, failed to submit 0 00:37:09.179 success 0, unsuccessful 67344, failed 0 00:37:09.179 18:53:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:09.179 18:53:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:12.479 Initializing NVMe Controllers 00:37:12.479 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:12.479 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:12.479 Initialization complete. Launching workers. 
00:37:12.479 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117800, failed: 0 00:37:12.479 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29630, failed to submit 88170 00:37:12.479 success 0, unsuccessful 29630, failed 0 00:37:12.479 18:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:12.479 18:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:15.024 Initializing NVMe Controllers 00:37:15.024 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:15.024 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:15.024 Initialization complete. Launching workers. 00:37:15.024 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145825, failed: 0 00:37:15.024 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36502, failed to submit 109323 00:37:15.024 success 0, unsuccessful 36502, failed 0 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:15.024 18:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:15.024 18:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:19.232 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.232 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:19.232 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:20.617 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:20.877 00:37:20.877 real 0m20.570s 00:37:20.877 user 0m9.877s 00:37:20.877 sys 0m6.316s 00:37:20.877 18:53:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.877 18:53:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:20.877 ************************************ 00:37:20.877 END TEST kernel_target_abort 00:37:20.877 ************************************ 00:37:20.877 18:53:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:20.877 18:53:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:20.877 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:20.877 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:20.877 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.878 rmmod nvme_tcp 00:37:20.878 rmmod nvme_fabrics 00:37:20.878 rmmod nvme_keyring 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1546719 ']' 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1546719 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1546719 ']' 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1546719 00:37:20.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1546719) - No such process 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1546719 is not found' 00:37:20.878 Process with pid 1546719 is not found 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:20.878 18:53:14 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:24.176 Waiting for block devices as requested 00:37:24.437 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:24.437 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:24.437 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:24.697 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:24.697 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:24.697 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:24.958 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:24.958 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:24.958 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:25.219 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:25.219 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:25.480 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:25.480 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:25.480 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:25.480 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:25.740 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:25.740 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:26.001 18:53:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.038 18:53:22 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:28.038 00:37:28.038 real 0m52.731s 00:37:28.038 user 1m4.370s 00:37:28.038 sys 0m19.453s 00:37:28.038 18:53:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:28.038 18:53:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:28.038 ************************************ 00:37:28.038 END TEST nvmf_abort_qd_sizes 00:37:28.038 ************************************ 00:37:28.298 18:53:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:28.298 18:53:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:28.299 18:53:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:28.299 18:53:22 -- common/autotest_common.sh@10 -- # set +x 00:37:28.299 ************************************ 00:37:28.299 START TEST keyring_file 00:37:28.299 ************************************ 00:37:28.299 18:53:22 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:28.299 * Looking for test storage... 
00:37:28.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:28.299 18:53:22 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:28.299 18:53:22 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:37:28.299 18:53:22 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:28.559 18:53:22 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:28.559 18:53:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:28.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.560 --rc genhtml_branch_coverage=1 00:37:28.560 --rc genhtml_function_coverage=1 00:37:28.560 --rc genhtml_legend=1 00:37:28.560 --rc geninfo_all_blocks=1 00:37:28.560 --rc geninfo_unexecuted_blocks=1 00:37:28.560 00:37:28.560 ' 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:28.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.560 --rc genhtml_branch_coverage=1 00:37:28.560 --rc genhtml_function_coverage=1 00:37:28.560 --rc genhtml_legend=1 00:37:28.560 --rc geninfo_all_blocks=1 
00:37:28.560 --rc geninfo_unexecuted_blocks=1 00:37:28.560 00:37:28.560 ' 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:28.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.560 --rc genhtml_branch_coverage=1 00:37:28.560 --rc genhtml_function_coverage=1 00:37:28.560 --rc genhtml_legend=1 00:37:28.560 --rc geninfo_all_blocks=1 00:37:28.560 --rc geninfo_unexecuted_blocks=1 00:37:28.560 00:37:28.560 ' 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:28.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.560 --rc genhtml_branch_coverage=1 00:37:28.560 --rc genhtml_function_coverage=1 00:37:28.560 --rc genhtml_legend=1 00:37:28.560 --rc geninfo_all_blocks=1 00:37:28.560 --rc geninfo_unexecuted_blocks=1 00:37:28.560 00:37:28.560 ' 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.560 18:53:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.560 18:53:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.560 18:53:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.560 18:53:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.560 18:53:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:28.560 18:53:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:28.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
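The "[: : integer expression expected" message above is shell noise rather than a test failure: nvmf/common.sh line 33 runs an integer test ('[' '' -eq 1 ']') against a variable that is unset in this environment. The script continues because '[' merely returns nonzero, which already means "feature disabled" here. A hedged sketch of the usual fix, where SOME_FLAG stands in for whatever variable that line actually reads:

# Give the expansion a default so [ ... -eq 1 ] always sees an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi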
00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AxCJ9NTBeO 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AxCJ9NTBeO 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AxCJ9NTBeO 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AxCJ9NTBeO 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.luWdsGiX1f 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:28.560 18:53:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.luWdsGiX1f 00:37:28.560 18:53:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.luWdsGiX1f 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.luWdsGiX1f 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=1557313 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1557313 00:37:28.560 18:53:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1557313 ']' 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:28.560 18:53:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:28.560 [2024-10-08 18:53:22.574023] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:37:28.561 [2024-10-08 18:53:22.574078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557313 ] 00:37:28.821 [2024-10-08 18:53:22.649594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.821 [2024-10-08 18:53:22.716546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:29.392 18:53:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.392 [2024-10-08 18:53:23.356419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.392 null0 00:37:29.392 [2024-10-08 18:53:23.388465] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:29.392 [2024-10-08 18:53:23.388809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.392 18:53:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.392 [2024-10-08 18:53:23.420535] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:29.392 request: 00:37:29.392 { 00:37:29.392 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.392 "secure_channel": false, 00:37:29.392 "listen_address": { 00:37:29.392 "trtype": "tcp", 00:37:29.392 "traddr": "127.0.0.1", 00:37:29.392 "trsvcid": "4420" 00:37:29.392 }, 00:37:29.392 "method": "nvmf_subsystem_add_listener", 00:37:29.392 "req_id": 1 00:37:29.392 } 00:37:29.392 Got JSON-RPC error response 00:37:29.392 response: 00:37:29.392 { 00:37:29.392 
"code": -32602, 00:37:29.392 "message": "Invalid parameters" 00:37:29.392 } 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:29.392 18:53:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=1557339 00:37:29.392 18:53:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1557339 /var/tmp/bperf.sock 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1557339 ']' 00:37:29.392 18:53:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:29.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:29.392 18:53:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.653 [2024-10-08 18:53:23.479917] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:37:29.653 [2024-10-08 18:53:23.479970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557339 ] 00:37:29.653 [2024-10-08 18:53:23.558404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.653 [2024-10-08 18:53:23.631379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.596 18:53:24 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:30.596 18:53:24 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:30.596 18:53:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:30.596 18:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:30.596 18:53:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.luWdsGiX1f 00:37:30.596 18:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.luWdsGiX1f 00:37:30.857 18:53:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:30.857 18:53:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:30.857 18:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.857 18:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.857 18:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:30.857 18:53:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AxCJ9NTBeO == \/\t\m\p\/\t\m\p\.\A\x\C\J\9\N\T\B\e\O ]] 00:37:30.857 18:53:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:30.857 18:53:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:30.857 18:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.857 18:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:30.857 18:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.117 18:53:25 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.luWdsGiX1f == \/\t\m\p\/\t\m\p\.\l\u\W\d\s\G\i\X\1\f ]] 00:37:31.117 18:53:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:31.117 18:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.117 18:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.117 18:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.117 18:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.117 18:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.378 18:53:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:31.378 18:53:25 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:31.378 18:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:31.378 18:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.378 18:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.378 18:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.378 18:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.378 18:53:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:31.378 18:53:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.378 18:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.640 [2024-10-08 18:53:25.527604] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:31.640 nvme0n1 00:37:31.640 18:53:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:31.640 18:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.640 18:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.640 18:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.640 18:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.640 18:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.901 18:53:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:31.901 18:53:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:31.901 18:53:25 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:37:31.901 18:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.901 18:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.901 18:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.901 18:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.162 18:53:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:32.162 18:53:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:32.162 Running I/O for 1 seconds... 00:37:33.102 20873.00 IOPS, 81.54 MiB/s 00:37:33.102 Latency(us) 00:37:33.102 [2024-10-08T16:53:27.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.102 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:33.102 nvme0n1 : 1.00 20922.98 81.73 0.00 0.00 6107.53 3317.76 10212.69 00:37:33.102 [2024-10-08T16:53:27.159Z] =================================================================================================================== 00:37:33.102 [2024-10-08T16:53:27.159Z] Total : 20922.98 81.73 0.00 0.00 6107.53 3317.76 10212.69 00:37:33.102 { 00:37:33.102 "results": [ 00:37:33.102 { 00:37:33.102 "job": "nvme0n1", 00:37:33.102 "core_mask": "0x2", 00:37:33.102 "workload": "randrw", 00:37:33.102 "percentage": 50, 00:37:33.102 "status": "finished", 00:37:33.102 "queue_depth": 128, 00:37:33.102 "io_size": 4096, 00:37:33.102 "runtime": 1.003729, 00:37:33.102 "iops": 20922.9782142391, 00:37:33.102 "mibps": 81.73038364937149, 00:37:33.102 "io_failed": 0, 00:37:33.102 "io_timeout": 0, 00:37:33.102 "avg_latency_us": 6107.527832008, 00:37:33.102 "min_latency_us": 3317.76, 00:37:33.102 "max_latency_us": 10212.693333333333 00:37:33.102 } 00:37:33.102 ], 00:37:33.102 "core_count": 1 00:37:33.102 } 00:37:33.102 18:53:27 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:33.102 18:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:33.363 18:53:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:33.363 18:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.363 18:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.363 18:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.363 18:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.363 18:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.624 18:53:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:33.624 18:53:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:33.624 18:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:33.624 18:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.624 18:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.624 18:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.624 18:53:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.624 18:53:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:33.624 18:53:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:33.624 18:53:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.624 18:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:33.885 [2024-10-08 18:53:27.771242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:33.885 [2024-10-08 18:53:27.771779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aca80 (107): Transport endpoint is not connected 00:37:33.885 [2024-10-08 18:53:27.772775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aca80 (9): Bad file descriptor 00:37:33.885 [2024-10-08 18:53:27.773776] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:33.885 [2024-10-08 18:53:27.773784] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:33.885 [2024-10-08 18:53:27.773789] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:33.885 [2024-10-08 18:53:27.773796] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
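The ERROR cascade above (errno 107, a bad file descriptor, the controller entering failed state) is the intended outcome: file.sh@70 attaches with the mismatched key1 under the NOT wrapper, so the step passes only if the RPC fails. A minimal sketch of such a wrapper; SPDK's real helper in autotest_common.sh also does the es/valid_exec_arg bookkeeping visible in the trace, which is omitted here:

NOT() {
    # Invert the command's status: an expected failure becomes success.
    if "$@"; then
        return 1   # the command unexpectedly succeeded
    fi
    return 0
}

# Usage mirroring the logged step (rpc.py = scripts/rpc.py; --psk names a
# registered keyring key, not a file path):
NOT rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The JSON-RPC request/response dump that follows is the client-side record of the same failed attach; code -5 (EIO, "Input/output error") is how the aborted TLS handshake surfaces to the RPC caller.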
00:37:33.885 request: 00:37:33.885 { 00:37:33.885 "name": "nvme0", 00:37:33.885 "trtype": "tcp", 00:37:33.885 "traddr": "127.0.0.1", 00:37:33.885 "adrfam": "ipv4", 00:37:33.885 "trsvcid": "4420", 00:37:33.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:33.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:33.885 "prchk_reftag": false, 00:37:33.885 "prchk_guard": false, 00:37:33.885 "hdgst": false, 00:37:33.885 "ddgst": false, 00:37:33.885 "psk": "key1", 00:37:33.885 "allow_unrecognized_csi": false, 00:37:33.885 "method": "bdev_nvme_attach_controller", 00:37:33.885 "req_id": 1 00:37:33.885 } 00:37:33.885 Got JSON-RPC error response 00:37:33.885 response: 00:37:33.885 { 00:37:33.885 "code": -5, 00:37:33.885 "message": "Input/output error" 00:37:33.885 } 00:37:33.885 18:53:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:33.885 18:53:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:33.885 18:53:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:33.885 18:53:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:33.885 18:53:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:33.885 18:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.885 18:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.885 18:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.885 18:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.885 18:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.146 18:53:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:34.146 18:53:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:34.146 18:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.146 18:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.146 18:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.146 18:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.146 18:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.146 18:53:28 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:34.146 18:53:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:34.146 18:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:34.409 18:53:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:34.409 18:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:34.669 18:53:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:34.669 18:53:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:34.669 18:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.669 18:53:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:34.669 18:53:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AxCJ9NTBeO 00:37:34.669 18:53:28 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:34.669 18:53:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:34.669 18:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:34.929 [2024-10-08 18:53:28.830514] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AxCJ9NTBeO': 0100660 00:37:34.929 [2024-10-08 18:53:28.830536] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:34.929 request: 00:37:34.929 { 00:37:34.929 "name": "key0", 00:37:34.929 "path": "/tmp/tmp.AxCJ9NTBeO", 00:37:34.929 "method": "keyring_file_add_key", 00:37:34.929 "req_id": 1 00:37:34.929 } 00:37:34.929 Got JSON-RPC error response 00:37:34.929 response: 00:37:34.929 { 00:37:34.929 "code": -1, 00:37:34.929 "message": "Operation not permitted" 00:37:34.929 } 00:37:34.929 18:53:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:34.929 18:53:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:34.929 18:53:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:34.929 18:53:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:34.929 18:53:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AxCJ9NTBeO 00:37:34.929 18:53:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:34.929 18:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AxCJ9NTBeO 00:37:35.190 18:53:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AxCJ9NTBeO 00:37:35.190 18:53:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:35.190 18:53:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.190 18:53:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.190 18:53:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.190 18:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.190 18:53:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.190 18:53:29 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:35.190 18:53:29 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.451 18:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.451 [2024-10-08 18:53:29.403968] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AxCJ9NTBeO': No such file or directory 00:37:35.451 [2024-10-08 18:53:29.403987] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:35.451 [2024-10-08 18:53:29.404000] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:35.451 [2024-10-08 18:53:29.404006] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:35.451 [2024-10-08 18:53:29.404013] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:35.451 [2024-10-08 18:53:29.404018] bdev_nvme.c:6541:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:35.451 request: 00:37:35.451 { 00:37:35.451 "name": "nvme0", 00:37:35.451 "trtype": "tcp", 00:37:35.451 "traddr": "127.0.0.1", 00:37:35.451 "adrfam": "ipv4", 00:37:35.451 "trsvcid": "4420", 00:37:35.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.451 "prchk_reftag": false, 00:37:35.451 "prchk_guard": false, 00:37:35.451 "hdgst": false, 00:37:35.451 "ddgst": false, 00:37:35.451 "psk": "key0", 00:37:35.451 "allow_unrecognized_csi": false, 00:37:35.451 "method": "bdev_nvme_attach_controller", 00:37:35.451 "req_id": 1 00:37:35.451 } 00:37:35.451 Got JSON-RPC error response 00:37:35.451 response: 00:37:35.451 { 00:37:35.451 "code": -19, 00:37:35.451 "message": "No such device" 00:37:35.451 } 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:35.451 18:53:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:35.451 18:53:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:35.451 18:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:35.712 18:53:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hhlQrCUATm 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:35.712 18:53:29 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:35.712 18:53:29 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:35.712 18:53:29 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:35.712 18:53:29 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:35.712 18:53:29 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:35.712 18:53:29 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hhlQrCUATm 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hhlQrCUATm 00:37:35.712 18:53:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.hhlQrCUATm 00:37:35.712 18:53:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hhlQrCUATm 00:37:35.712 18:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hhlQrCUATm 00:37:35.972 18:53:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.972 18:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.972 nvme0n1 00:37:36.232 18:53:30 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:36.232 18:53:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.232 18:53:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.232 18:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.232 18:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.232 18:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.232 18:53:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:36.232 18:53:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:36.232 18:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:36.493 18:53:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:36.493 18:53:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:36.493 18:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.493 18:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.493 18:53:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.493 18:53:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:36.753 18:53:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:36.753 18:53:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.753 18:53:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.754 18:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.754 18:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.754 18:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.754 18:53:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:36.754 18:53:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:36.754 18:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:37.014 18:53:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:37.014 18:53:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:37.014 18:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.275 18:53:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:37.275 18:53:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hhlQrCUATm 00:37:37.275 18:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hhlQrCUATm 00:37:37.275 18:53:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.luWdsGiX1f 00:37:37.275 18:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.luWdsGiX1f 00:37:37.536 18:53:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.536 18:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.796 nvme0n1 00:37:37.796 18:53:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:37.796 18:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:38.058 18:53:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:38.058 "subsystems": [ 00:37:38.058 { 00:37:38.058 "subsystem": "keyring", 00:37:38.058 "config": [ 00:37:38.058 { 00:37:38.058 "method": "keyring_file_add_key", 00:37:38.058 "params": { 00:37:38.058 "name": "key0", 00:37:38.058 "path": "/tmp/tmp.hhlQrCUATm" 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "keyring_file_add_key", 00:37:38.058 "params": { 00:37:38.058 "name": "key1", 00:37:38.058 "path": "/tmp/tmp.luWdsGiX1f" 00:37:38.058 } 00:37:38.058 } 00:37:38.058 ] 00:37:38.058 
}, 00:37:38.058 { 00:37:38.058 "subsystem": "iobuf", 00:37:38.058 "config": [ 00:37:38.058 { 00:37:38.058 "method": "iobuf_set_options", 00:37:38.058 "params": { 00:37:38.058 "small_pool_count": 8192, 00:37:38.058 "large_pool_count": 1024, 00:37:38.058 "small_bufsize": 8192, 00:37:38.058 "large_bufsize": 135168 00:37:38.058 } 00:37:38.058 } 00:37:38.058 ] 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "subsystem": "sock", 00:37:38.058 "config": [ 00:37:38.058 { 00:37:38.058 "method": "sock_set_default_impl", 00:37:38.058 "params": { 00:37:38.058 "impl_name": "posix" 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "sock_impl_set_options", 00:37:38.058 "params": { 00:37:38.058 "impl_name": "ssl", 00:37:38.058 "recv_buf_size": 4096, 00:37:38.058 "send_buf_size": 4096, 00:37:38.058 "enable_recv_pipe": true, 00:37:38.058 "enable_quickack": false, 00:37:38.058 "enable_placement_id": 0, 00:37:38.058 "enable_zerocopy_send_server": true, 00:37:38.058 "enable_zerocopy_send_client": false, 00:37:38.058 "zerocopy_threshold": 0, 00:37:38.058 "tls_version": 0, 00:37:38.058 "enable_ktls": false 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "sock_impl_set_options", 00:37:38.058 "params": { 00:37:38.058 "impl_name": "posix", 00:37:38.058 "recv_buf_size": 2097152, 00:37:38.058 "send_buf_size": 2097152, 00:37:38.058 "enable_recv_pipe": true, 00:37:38.058 "enable_quickack": false, 00:37:38.058 "enable_placement_id": 0, 00:37:38.058 "enable_zerocopy_send_server": true, 00:37:38.058 "enable_zerocopy_send_client": false, 00:37:38.058 "zerocopy_threshold": 0, 00:37:38.058 "tls_version": 0, 00:37:38.058 "enable_ktls": false 00:37:38.058 } 00:37:38.058 } 00:37:38.058 ] 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "subsystem": "vmd", 00:37:38.058 "config": [] 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "subsystem": "accel", 00:37:38.058 "config": [ 00:37:38.058 { 00:37:38.058 "method": "accel_set_options", 00:37:38.058 "params": { 00:37:38.058 "small_cache_size": 128, 00:37:38.058 "large_cache_size": 16, 00:37:38.058 "task_count": 2048, 00:37:38.058 "sequence_count": 2048, 00:37:38.058 "buf_count": 2048 00:37:38.058 } 00:37:38.058 } 00:37:38.058 ] 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "subsystem": "bdev", 00:37:38.058 "config": [ 00:37:38.058 { 00:37:38.058 "method": "bdev_set_options", 00:37:38.058 "params": { 00:37:38.058 "bdev_io_pool_size": 65535, 00:37:38.058 "bdev_io_cache_size": 256, 00:37:38.058 "bdev_auto_examine": true, 00:37:38.058 "iobuf_small_cache_size": 128, 00:37:38.058 "iobuf_large_cache_size": 16 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "bdev_raid_set_options", 00:37:38.058 "params": { 00:37:38.058 "process_window_size_kb": 1024, 00:37:38.058 "process_max_bandwidth_mb_sec": 0 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "bdev_iscsi_set_options", 00:37:38.058 "params": { 00:37:38.058 "timeout_sec": 30 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "bdev_nvme_set_options", 00:37:38.058 "params": { 00:37:38.058 "action_on_timeout": "none", 00:37:38.058 "timeout_us": 0, 00:37:38.058 "timeout_admin_us": 0, 00:37:38.058 "keep_alive_timeout_ms": 10000, 00:37:38.058 "arbitration_burst": 0, 00:37:38.058 "low_priority_weight": 0, 00:37:38.058 "medium_priority_weight": 0, 00:37:38.058 "high_priority_weight": 0, 00:37:38.058 "nvme_adminq_poll_period_us": 10000, 00:37:38.058 "nvme_ioq_poll_period_us": 0, 00:37:38.058 "io_queue_requests": 512, 00:37:38.058 "delay_cmd_submit": true, 00:37:38.058 
"transport_retry_count": 4, 00:37:38.058 "bdev_retry_count": 3, 00:37:38.058 "transport_ack_timeout": 0, 00:37:38.058 "ctrlr_loss_timeout_sec": 0, 00:37:38.058 "reconnect_delay_sec": 0, 00:37:38.058 "fast_io_fail_timeout_sec": 0, 00:37:38.058 "disable_auto_failback": false, 00:37:38.058 "generate_uuids": false, 00:37:38.058 "transport_tos": 0, 00:37:38.058 "nvme_error_stat": false, 00:37:38.058 "rdma_srq_size": 0, 00:37:38.058 "io_path_stat": false, 00:37:38.058 "allow_accel_sequence": false, 00:37:38.058 "rdma_max_cq_size": 0, 00:37:38.058 "rdma_cm_event_timeout_ms": 0, 00:37:38.058 "dhchap_digests": [ 00:37:38.058 "sha256", 00:37:38.058 "sha384", 00:37:38.058 "sha512" 00:37:38.058 ], 00:37:38.058 "dhchap_dhgroups": [ 00:37:38.058 "null", 00:37:38.058 "ffdhe2048", 00:37:38.058 "ffdhe3072", 00:37:38.058 "ffdhe4096", 00:37:38.058 "ffdhe6144", 00:37:38.058 "ffdhe8192" 00:37:38.058 ] 00:37:38.058 } 00:37:38.058 }, 00:37:38.058 { 00:37:38.058 "method": "bdev_nvme_attach_controller", 00:37:38.058 "params": { 00:37:38.058 "name": "nvme0", 00:37:38.058 "trtype": "TCP", 00:37:38.058 "adrfam": "IPv4", 00:37:38.059 "traddr": "127.0.0.1", 00:37:38.059 "trsvcid": "4420", 00:37:38.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.059 "prchk_reftag": false, 00:37:38.059 "prchk_guard": false, 00:37:38.059 "ctrlr_loss_timeout_sec": 0, 00:37:38.059 "reconnect_delay_sec": 0, 00:37:38.059 "fast_io_fail_timeout_sec": 0, 00:37:38.059 "psk": "key0", 00:37:38.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.059 "hdgst": false, 00:37:38.059 "ddgst": false, 00:37:38.059 "multipath": "multipath" 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "bdev_nvme_set_hotplug", 00:37:38.059 "params": { 00:37:38.059 "period_us": 100000, 00:37:38.059 "enable": false 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "bdev_wait_for_examine" 00:37:38.059 } 00:37:38.059 ] 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "subsystem": "nbd", 00:37:38.059 "config": [] 00:37:38.059 } 00:37:38.059 ] 00:37:38.059 }' 00:37:38.059 18:53:31 keyring_file -- keyring/file.sh@115 -- # killprocess 1557339 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1557339 ']' 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1557339 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557339 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557339' 00:37:38.059 killing process with pid 1557339 00:37:38.059 18:53:31 keyring_file -- common/autotest_common.sh@969 -- # kill 1557339 00:37:38.059 Received shutdown signal, test time was about 1.000000 seconds 00:37:38.059 00:37:38.059 Latency(us) 00:37:38.059 [2024-10-08T16:53:32.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.059 [2024-10-08T16:53:32.116Z] =================================================================================================================== 00:37:38.059 [2024-10-08T16:53:32.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:38.059 18:53:31 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1557339 00:37:38.059 18:53:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=1559149 00:37:38.059 18:53:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1559149 /var/tmp/bperf.sock 00:37:38.059 18:53:32 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1559149 ']' 00:37:38.059 18:53:32 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:38.059 18:53:32 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:38.059 18:53:32 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:38.059 18:53:32 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:38.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:38.059 18:53:32 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:38.059 18:53:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:38.059 18:53:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:38.059 "subsystems": [ 00:37:38.059 { 00:37:38.059 "subsystem": "keyring", 00:37:38.059 "config": [ 00:37:38.059 { 00:37:38.059 "method": "keyring_file_add_key", 00:37:38.059 "params": { 00:37:38.059 "name": "key0", 00:37:38.059 "path": "/tmp/tmp.hhlQrCUATm" 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "keyring_file_add_key", 00:37:38.059 "params": { 00:37:38.059 "name": "key1", 00:37:38.059 "path": "/tmp/tmp.luWdsGiX1f" 00:37:38.059 } 00:37:38.059 } 00:37:38.059 ] 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "subsystem": "iobuf", 00:37:38.059 "config": [ 00:37:38.059 { 00:37:38.059 "method": "iobuf_set_options", 00:37:38.059 "params": { 00:37:38.059 "small_pool_count": 8192, 00:37:38.059 "large_pool_count": 1024, 00:37:38.059 "small_bufsize": 8192, 00:37:38.059 "large_bufsize": 135168 00:37:38.059 } 00:37:38.059 } 00:37:38.059 ] 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "subsystem": "sock", 00:37:38.059 "config": [ 00:37:38.059 { 00:37:38.059 "method": "sock_set_default_impl", 00:37:38.059 "params": { 00:37:38.059 "impl_name": "posix" 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "sock_impl_set_options", 00:37:38.059 "params": { 00:37:38.059 "impl_name": "ssl", 00:37:38.059 "recv_buf_size": 4096, 00:37:38.059 "send_buf_size": 4096, 00:37:38.059 "enable_recv_pipe": true, 00:37:38.059 "enable_quickack": false, 00:37:38.059 "enable_placement_id": 0, 00:37:38.059 "enable_zerocopy_send_server": true, 00:37:38.059 "enable_zerocopy_send_client": false, 00:37:38.059 "zerocopy_threshold": 0, 00:37:38.059 "tls_version": 0, 00:37:38.059 "enable_ktls": false 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "sock_impl_set_options", 00:37:38.059 "params": { 00:37:38.059 "impl_name": "posix", 00:37:38.059 "recv_buf_size": 2097152, 00:37:38.059 "send_buf_size": 2097152, 00:37:38.059 "enable_recv_pipe": true, 00:37:38.059 "enable_quickack": false, 00:37:38.059 "enable_placement_id": 0, 00:37:38.059 "enable_zerocopy_send_server": true, 00:37:38.059 "enable_zerocopy_send_client": false, 00:37:38.059 "zerocopy_threshold": 0, 00:37:38.059 "tls_version": 0, 00:37:38.059 "enable_ktls": false 00:37:38.059 } 00:37:38.059 } 00:37:38.059 ] 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "subsystem": "vmd", 00:37:38.059 
"config": [] 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "subsystem": "accel", 00:37:38.059 "config": [ 00:37:38.059 { 00:37:38.059 "method": "accel_set_options", 00:37:38.059 "params": { 00:37:38.059 "small_cache_size": 128, 00:37:38.059 "large_cache_size": 16, 00:37:38.059 "task_count": 2048, 00:37:38.059 "sequence_count": 2048, 00:37:38.059 "buf_count": 2048 00:37:38.059 } 00:37:38.059 } 00:37:38.059 ] 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "subsystem": "bdev", 00:37:38.059 "config": [ 00:37:38.059 { 00:37:38.059 "method": "bdev_set_options", 00:37:38.059 "params": { 00:37:38.059 "bdev_io_pool_size": 65535, 00:37:38.059 "bdev_io_cache_size": 256, 00:37:38.059 "bdev_auto_examine": true, 00:37:38.059 "iobuf_small_cache_size": 128, 00:37:38.059 "iobuf_large_cache_size": 16 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "bdev_raid_set_options", 00:37:38.059 "params": { 00:37:38.059 "process_window_size_kb": 1024, 00:37:38.059 "process_max_bandwidth_mb_sec": 0 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "bdev_iscsi_set_options", 00:37:38.059 "params": { 00:37:38.059 "timeout_sec": 30 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "bdev_nvme_set_options", 00:37:38.059 "params": { 00:37:38.059 "action_on_timeout": "none", 00:37:38.059 "timeout_us": 0, 00:37:38.059 "timeout_admin_us": 0, 00:37:38.059 "keep_alive_timeout_ms": 10000, 00:37:38.059 "arbitration_burst": 0, 00:37:38.059 "low_priority_weight": 0, 00:37:38.059 "medium_priority_weight": 0, 00:37:38.059 "high_priority_weight": 0, 00:37:38.059 "nvme_adminq_poll_period_us": 10000, 00:37:38.059 "nvme_ioq_poll_period_us": 0, 00:37:38.059 "io_queue_requests": 512, 00:37:38.059 "delay_cmd_submit": true, 00:37:38.059 "transport_retry_count": 4, 00:37:38.059 "bdev_retry_count": 3, 00:37:38.059 "transport_ack_timeout": 0, 00:37:38.059 "ctrlr_loss_timeout_sec": 0, 00:37:38.059 "reconnect_delay_sec": 0, 00:37:38.059 "fast_io_fail_timeout_sec": 0, 00:37:38.059 "disable_auto_failback": false, 00:37:38.059 "generate_uuids": false, 00:37:38.059 "transport_tos": 0, 00:37:38.059 "nvme_error_stat": false, 00:37:38.059 "rdma_srq_size": 0, 00:37:38.059 "io_path_stat": false, 00:37:38.059 "allow_accel_sequence": false, 00:37:38.059 "rdma_max_cq_size": 0, 00:37:38.059 "rdma_cm_event_timeout_ms": 0, 00:37:38.059 "dhchap_digests": [ 00:37:38.059 "sha256", 00:37:38.059 "sha384", 00:37:38.059 "sha512" 00:37:38.059 ], 00:37:38.059 "dhchap_dhgroups": [ 00:37:38.059 "null", 00:37:38.059 "ffdhe2048", 00:37:38.059 "ffdhe3072", 00:37:38.059 "ffdhe4096", 00:37:38.059 "ffdhe6144", 00:37:38.059 "ffdhe8192" 00:37:38.059 ] 00:37:38.059 } 00:37:38.059 }, 00:37:38.059 { 00:37:38.059 "method": "bdev_nvme_attach_controller", 00:37:38.059 "params": { 00:37:38.059 "name": "nvme0", 00:37:38.060 "trtype": "TCP", 00:37:38.060 "adrfam": "IPv4", 00:37:38.060 "traddr": "127.0.0.1", 00:37:38.060 "trsvcid": "4420", 00:37:38.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.060 "prchk_reftag": false, 00:37:38.060 "prchk_guard": false, 00:37:38.060 "ctrlr_loss_timeout_sec": 0, 00:37:38.060 "reconnect_delay_sec": 0, 00:37:38.060 "fast_io_fail_timeout_sec": 0, 00:37:38.060 "psk": "key0", 00:37:38.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.060 "hdgst": false, 00:37:38.060 "ddgst": false, 00:37:38.060 "multipath": "multipath" 00:37:38.060 } 00:37:38.060 }, 00:37:38.060 { 00:37:38.060 "method": "bdev_nvme_set_hotplug", 00:37:38.060 "params": { 00:37:38.060 "period_us": 100000, 00:37:38.060 "enable": false 
00:37:38.060 } 00:37:38.060 }, 00:37:38.060 { 00:37:38.060 "method": "bdev_wait_for_examine" 00:37:38.060 } 00:37:38.060 ] 00:37:38.060 }, 00:37:38.060 { 00:37:38.060 "subsystem": "nbd", 00:37:38.060 "config": [] 00:37:38.060 } 00:37:38.060 ] 00:37:38.060 }' 00:37:38.321 [2024-10-08 18:53:32.159220] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 00:37:38.321 [2024-10-08 18:53:32.159278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559149 ] 00:37:38.321 [2024-10-08 18:53:32.236055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.321 [2024-10-08 18:53:32.288512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.582 [2024-10-08 18:53:32.431613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:39.154 18:53:32 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:39.154 18:53:32 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:39.154 18:53:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:39.154 18:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.154 18:53:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:39.154 18:53:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:39.154 18:53:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:39.154 18:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:39.154 18:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.154 18:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.154 18:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:39.154 18:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.414 18:53:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:39.414 18:53:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:39.414 18:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:39.414 18:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.414 18:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.414 18:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.414 18:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:39.414 18:53:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:39.414 18:53:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:39.414 18:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:39.414 18:53:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:39.675 18:53:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:39.675 18:53:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:39.675 18:53:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hhlQrCUATm 
/tmp/tmp.luWdsGiX1f 00:37:39.675 18:53:33 keyring_file -- keyring/file.sh@20 -- # killprocess 1559149 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1559149 ']' 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1559149 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559149 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559149' 00:37:39.675 killing process with pid 1559149 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@969 -- # kill 1559149 00:37:39.675 Received shutdown signal, test time was about 1.000000 seconds 00:37:39.675 00:37:39.675 Latency(us) 00:37:39.675 [2024-10-08T16:53:33.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.675 [2024-10-08T16:53:33.732Z] =================================================================================================================== 00:37:39.675 [2024-10-08T16:53:33.732Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:39.675 18:53:33 keyring_file -- common/autotest_common.sh@974 -- # wait 1559149 00:37:39.935 18:53:33 keyring_file -- keyring/file.sh@21 -- # killprocess 1557313 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1557313 ']' 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1557313 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557313 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557313' 00:37:39.935 killing process with pid 1557313 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@969 -- # kill 1557313 00:37:39.935 18:53:33 keyring_file -- common/autotest_common.sh@974 -- # wait 1557313 00:37:40.196 00:37:40.196 real 0m11.916s 00:37:40.196 user 0m28.830s 00:37:40.196 sys 0m2.590s 00:37:40.196 18:53:34 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:40.196 18:53:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:40.196 ************************************ 00:37:40.196 END TEST keyring_file 00:37:40.196 ************************************ 00:37:40.196 18:53:34 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:40.196 18:53:34 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:40.196 18:53:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:40.196 18:53:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:40.196 18:53:34 -- common/autotest_common.sh@10 -- # set +x 
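
The reference-count assertions in the keyring_file pass above reduce to a small jq query pattern against bdevperf's RPC socket. A minimal sketch using the rpc.py path and socket from this run; the shell variable rpc is shorthand for the harness's bperf_cmd helper, and the expected values (two keys, key0 at refcnt 2 while nvme0 holds it, key1 at 1) are the ones asserted in the trace:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc keyring_get_keys | jq length                                     # 2 keys registered
    $rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'  # 2: pinned by nvme0
    $rpc keyring_get_keys | jq '.[] | select(.name == "key1") | .refcnt'  # 1: registered, unused
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'                     # nvme0
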
00:37:40.196 ************************************ 00:37:40.196 START TEST keyring_linux 00:37:40.196 ************************************ 00:37:40.196 18:53:34 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:40.196 Joined session keyring: 161567380 00:37:40.457 * Looking for test storage... 00:37:40.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.457 18:53:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.457 --rc genhtml_branch_coverage=1 00:37:40.457 --rc genhtml_function_coverage=1 00:37:40.457 --rc genhtml_legend=1 00:37:40.457 --rc geninfo_all_blocks=1 00:37:40.457 --rc geninfo_unexecuted_blocks=1 00:37:40.457 00:37:40.457 ' 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.457 --rc genhtml_branch_coverage=1 00:37:40.457 --rc genhtml_function_coverage=1 00:37:40.457 --rc genhtml_legend=1 00:37:40.457 --rc geninfo_all_blocks=1 00:37:40.457 --rc geninfo_unexecuted_blocks=1 00:37:40.457 00:37:40.457 ' 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.457 --rc genhtml_branch_coverage=1 00:37:40.457 --rc genhtml_function_coverage=1 00:37:40.457 --rc genhtml_legend=1 00:37:40.457 --rc geninfo_all_blocks=1 00:37:40.457 --rc geninfo_unexecuted_blocks=1 00:37:40.457 00:37:40.457 ' 00:37:40.457 18:53:34 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:40.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.457 --rc genhtml_branch_coverage=1 00:37:40.457 --rc genhtml_function_coverage=1 00:37:40.457 --rc genhtml_legend=1 00:37:40.457 --rc geninfo_all_blocks=1 00:37:40.457 --rc geninfo_unexecuted_blocks=1 00:37:40.457 00:37:40.457 ' 00:37:40.457 18:53:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:40.457 18:53:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:40.457 18:53:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.458 18:53:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.458 18:53:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.458 18:53:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.458 18:53:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.458 18:53:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.458 18:53:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.458 18:53:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.458 18:53:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:40.458 18:53:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
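
The NVMe host identity echoed in the trace above comes from nvme-cli. A short sketch of the equivalent steps; the trace does not show how common.sh derives the host ID from the NQN, so the parameter expansion below is an assumption (the NVME_HOST array line is verbatim from the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID; assumed derivation
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
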
00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:40.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:40.458 /tmp/:spdk-test:key0 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:40.458 
18:53:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:40.458 18:53:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:40.458 18:53:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:40.458 /tmp/:spdk-test:key1 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1559596 00:37:40.458 18:53:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1559596 00:37:40.458 18:53:34 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1559596 ']' 00:37:40.458 18:53:34 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.458 18:53:34 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:40.458 18:53:34 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.458 18:53:34 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:40.458 18:53:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:40.720 [2024-10-08 18:53:34.537026] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
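
The prep_key/format_interchange_psk steps traced above turn a raw hex string into the NVMe TLS PSK interchange form before it is written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A sketch of what the traced "python -" step appears to compute (an assumption, not the exact nvmf/common.sh source): base64 over the key bytes plus a little-endian CRC32 trailer, framed as NVMeTLSkey-1:<2-digit digest>:...:, with digest 0 rendered as 00:

    python3 - 00112233445566778899aabbccddeeff <<'EOF'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()                  # key material as passed on the command line
    crc = struct.pack("<I", zlib.crc32(key))    # 4-byte little-endian CRC32 trailer
    print("NVMeTLSkey-1:00:{}:".format(base64.b64encode(key + crc).decode()))
    EOF

If this matches the script's implementation, the output reproduces the key0 string seen in the trace: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:.
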
00:37:40.720 [2024-10-08 18:53:34.537099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559596 ] 00:37:40.720 [2024-10-08 18:53:34.618017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.720 [2024-10-08 18:53:34.675177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.291 18:53:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:41.291 18:53:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:41.291 18:53:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:41.291 18:53:35 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.291 18:53:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:41.552 [2024-10-08 18:53:35.351347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.552 null0 00:37:41.552 [2024-10-08 18:53:35.383397] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:41.552 [2024-10-08 18:53:35.383736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.552 18:53:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:41.552 596858669 00:37:41.552 18:53:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:41.552 455705338 00:37:41.552 18:53:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1559924 00:37:41.552 18:53:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1559924 /var/tmp/bperf.sock 00:37:41.552 18:53:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1559924 ']' 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:41.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:41.552 18:53:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:41.552 [2024-10-08 18:53:35.461866] Starting SPDK v25.01-pre git sha1 6f51f621d / DPDK 24.03.0 initialization... 
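
Loading those interchange strings into the kernel session keyring is plain keyctl, as the two adds above show; the returned serials (596858669 and 455705338) are what the test later cross-checks against SPDK's view of the keys. A sketch using the exact names and payloads from this run:

    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s
    sn=$(keyctl search @s user :spdk-test:key0)   # resolves the name to its serial
    keyctl print "$sn"                            # dumps the payload for comparison
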
00:37:41.552 [2024-10-08 18:53:35.461914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559924 ] 00:37:41.552 [2024-10-08 18:53:35.538416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.552 [2024-10-08 18:53:35.591695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.493 18:53:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:42.493 18:53:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:42.493 18:53:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:42.493 18:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:42.493 18:53:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:42.493 18:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:42.754 18:53:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:42.754 18:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:42.754 [2024-10-08 18:53:36.764173] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:43.015 nvme0n1 00:37:43.015 18:53:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:43.015 18:53:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:43.015 18:53:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:43.015 18:53:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:43.015 18:53:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:43.015 18:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.015 18:53:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:43.015 18:53:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:43.015 18:53:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:43.015 18:53:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:43.015 18:53:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.015 18:53:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.015 18:53:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:43.275 18:53:37 keyring_linux -- keyring/linux.sh@25 -- # sn=596858669 00:37:43.275 18:53:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:43.275 18:53:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:43.275 18:53:37 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 596858669 == \5\9\6\8\5\8\6\6\9 ]] 00:37:43.275 18:53:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 596858669 00:37:43.275 18:53:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:43.275 18:53:37 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:43.275 Running I/O for 1 seconds... 00:37:44.659 24309.00 IOPS, 94.96 MiB/s 00:37:44.659 Latency(us) 00:37:44.659 [2024-10-08T16:53:38.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:44.659 nvme0n1 : 1.01 24308.77 94.96 0.00 0.00 5249.57 4396.37 9175.04 00:37:44.659 [2024-10-08T16:53:38.716Z] =================================================================================================================== 00:37:44.659 [2024-10-08T16:53:38.716Z] Total : 24308.77 94.96 0.00 0.00 5249.57 4396.37 9175.04 00:37:44.659 { 00:37:44.659 "results": [ 00:37:44.659 { 00:37:44.659 "job": "nvme0n1", 00:37:44.659 "core_mask": "0x2", 00:37:44.659 "workload": "randread", 00:37:44.659 "status": "finished", 00:37:44.659 "queue_depth": 128, 00:37:44.659 "io_size": 4096, 00:37:44.659 "runtime": 1.005275, 00:37:44.659 "iops": 24308.771231752504, 00:37:44.659 "mibps": 94.95613762403322, 00:37:44.659 "io_failed": 0, 00:37:44.659 "io_timeout": 0, 00:37:44.659 "avg_latency_us": 5249.565032259824, 00:37:44.659 "min_latency_us": 4396.373333333333, 00:37:44.659 "max_latency_us": 9175.04 00:37:44.659 } 00:37:44.659 ], 00:37:44.659 "core_count": 1 00:37:44.659 } 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:44.659 18:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:44.659 18:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:44.659 18:53:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
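
The measured randread pass above follows a simple end-to-end shape: start bdevperf paused, enable the Linux keyring plugin, attach over TCP with the kernel-held PSK, then trigger the run. A condensed sketch of the commands visible in the trace (paths as used in this job; the harness also waits for the RPC socket between launch and the first RPC, omitted here):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z --wait-for-rpc &
    $rpc keyring_linux_set_options --enable    # lets ":spdk-test:*" names resolve via the keyring
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
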
00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:44.659 18:53:38 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.659 18:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.920 [2024-10-08 18:53:38.862999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:44.920 [2024-10-08 18:53:38.863970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa80830 (107): Transport endpoint is not connected 00:37:44.920 [2024-10-08 18:53:38.864966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa80830 (9): Bad file descriptor 00:37:44.920 [2024-10-08 18:53:38.865968] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:44.920 [2024-10-08 18:53:38.865980] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:44.920 [2024-10-08 18:53:38.865986] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:44.920 [2024-10-08 18:53:38.865992] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
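
The attach attempted here uses :spdk-test:key1, a PSK the target was not configured to accept, so the TLS handshake fails (the errno 107 disconnects above) and the RPC that follows returns an error. The harness runs the command under a NOT-style wrapper so the step passes only when the attach fails; a sketch of the wrapper's effect (the real autotest_common.sh helper also does the es bookkeeping visible in the trace):

    NOT() { if "$@"; then return 1; else return 0; fi; }   # invert the exit status
    NOT $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
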
00:37:44.920 request: 00:37:44.920 { 00:37:44.920 "name": "nvme0", 00:37:44.920 "trtype": "tcp", 00:37:44.920 "traddr": "127.0.0.1", 00:37:44.920 "adrfam": "ipv4", 00:37:44.920 "trsvcid": "4420", 00:37:44.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.920 "prchk_reftag": false, 00:37:44.920 "prchk_guard": false, 00:37:44.920 "hdgst": false, 00:37:44.920 "ddgst": false, 00:37:44.920 "psk": ":spdk-test:key1", 00:37:44.920 "allow_unrecognized_csi": false, 00:37:44.920 "method": "bdev_nvme_attach_controller", 00:37:44.920 "req_id": 1 00:37:44.920 } 00:37:44.920 Got JSON-RPC error response 00:37:44.920 response: 00:37:44.920 { 00:37:44.920 "code": -5, 00:37:44.920 "message": "Input/output error" 00:37:44.920 } 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@33 -- # sn=596858669 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 596858669 00:37:44.920 1 links removed 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@33 -- # sn=455705338 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 455705338 00:37:44.920 1 links removed 00:37:44.920 18:53:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1559924 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1559924 ']' 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1559924 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559924 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559924' 00:37:44.920 killing process with pid 1559924 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@969 -- # kill 1559924 00:37:44.920 Received shutdown signal, test time was about 1.000000 seconds 00:37:44.920 00:37:44.920 
Latency(us) 00:37:44.920 [2024-10-08T16:53:38.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.920 [2024-10-08T16:53:38.977Z] =================================================================================================================== 00:37:44.920 [2024-10-08T16:53:38.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:44.920 18:53:38 keyring_linux -- common/autotest_common.sh@974 -- # wait 1559924 00:37:45.180 18:53:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1559596 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1559596 ']' 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1559596 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559596 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559596' 00:37:45.180 killing process with pid 1559596 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@969 -- # kill 1559596 00:37:45.180 18:53:39 keyring_linux -- common/autotest_common.sh@974 -- # wait 1559596 00:37:45.441 00:37:45.441 real 0m5.198s 00:37:45.441 user 0m9.616s 00:37:45.441 sys 0m1.451s 00:37:45.441 18:53:39 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:45.441 18:53:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:45.441 ************************************ 00:37:45.441 END TEST keyring_linux 00:37:45.441 ************************************ 00:37:45.441 18:53:39 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:45.441 18:53:39 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:45.441 18:53:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:45.441 18:53:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:45.441 18:53:39 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:45.441 18:53:39 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:45.441 18:53:39 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:45.441 18:53:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:45.441 18:53:39 -- common/autotest_common.sh@10 -- # set +x 00:37:45.441 18:53:39 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:45.441 18:53:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:45.441 18:53:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:45.441 18:53:39 -- common/autotest_common.sh@10 -- # set +x 00:37:53.579 INFO: APP EXITING 
00:37:53.579 INFO: killing all VMs 00:37:53.579 INFO: killing vhost app 00:37:53.579 INFO: EXIT DONE 00:37:56.880 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:56.880 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:56.880 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:57.141 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:57.141 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:57.141 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:01.349 Cleaning 00:38:01.349 Removing: /var/run/dpdk/spdk0/config 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:01.349 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:01.349 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:01.349 Removing: /var/run/dpdk/spdk1/config 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:01.349 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:01.349 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:01.349 Removing: /var/run/dpdk/spdk2/config 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:01.349 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:01.349 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:01.349 Removing: /var/run/dpdk/spdk3/config 00:38:01.349 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:01.349 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:01.349 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:01.349 Removing: /var/run/dpdk/spdk4/config 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:01.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:01.350 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:01.350 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:01.350 Removing: /dev/shm/bdev_svc_trace.1 00:38:01.350 Removing: /dev/shm/nvmf_trace.0 00:38:01.350 Removing: /dev/shm/spdk_tgt_trace.pid981014 00:38:01.350 Removing: /var/run/dpdk/spdk0 00:38:01.350 Removing: /var/run/dpdk/spdk1 00:38:01.350 Removing: /var/run/dpdk/spdk2 00:38:01.350 Removing: /var/run/dpdk/spdk3 00:38:01.350 Removing: /var/run/dpdk/spdk4 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1002802 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1008274 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1020965 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1021804 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1027111 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1027469 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1032939 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1040085 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1043178 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1056164 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1067881 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1069923 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1070942 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1092134 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1097215 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1154137 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1160751 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1167699 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1175717 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1175790 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1176793 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1177815 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1178856 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1179542 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1179544 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1179870 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1179910 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1180026 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1181089 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1182088 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1183164 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1183751 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1183874 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1184097 00:38:01.350 Removing: 
/var/run/dpdk/spdk_pid1185485 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1186751 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1196805 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1231330 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1236799 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1238798 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1241052 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1241218 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1241505 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1241848 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1242580 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1244919 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1246034 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1246715 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1249432 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1250139 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1250878 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1256304 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1263371 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1263372 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1263374 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1268224 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1278754 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1283594 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1290871 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1292369 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1294074 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1295746 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1301568 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1306674 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1316532 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1316636 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1321901 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1322233 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1322404 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1322907 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1322917 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1328676 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1329187 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1334749 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1338004 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1344559 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1351163 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1361454 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1370705 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1370759 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1394044 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1394859 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1395572 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1396254 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1397315 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1398006 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1398803 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1399596 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1404806 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1405140 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1412363 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1412618 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1419700 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1424879 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1436492 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1437159 00:38:01.350 Removing: /var/run/dpdk/spdk_pid1442350 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1442723 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1447896 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1454854 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1457912 00:38:01.611 Removing: 
/var/run/dpdk/spdk_pid1470906 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1481778 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1483756 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1484888 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1504711 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1509494 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1512681 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1520673 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1520765 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1527148 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1529559 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1531751 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1533148 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1535458 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1536886 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1547072 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1547734 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1548276 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1551167 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1551726 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1552392 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1557313 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1557339 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1559149 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1559596 00:38:01.611 Removing: /var/run/dpdk/spdk_pid1559924 00:38:01.611 Removing: /var/run/dpdk/spdk_pid979528 00:38:01.611 Removing: /var/run/dpdk/spdk_pid981014 00:38:01.611 Removing: /var/run/dpdk/spdk_pid981863 00:38:01.611 Removing: /var/run/dpdk/spdk_pid982904 00:38:01.611 Removing: /var/run/dpdk/spdk_pid983242 00:38:01.611 Removing: /var/run/dpdk/spdk_pid984307 00:38:01.611 Removing: /var/run/dpdk/spdk_pid984491 00:38:01.611 Removing: /var/run/dpdk/spdk_pid984780 00:38:01.611 Removing: /var/run/dpdk/spdk_pid985923 00:38:01.611 Removing: /var/run/dpdk/spdk_pid986638 00:38:01.611 Removing: /var/run/dpdk/spdk_pid986998 00:38:01.611 Removing: /var/run/dpdk/spdk_pid987338 00:38:01.611 Removing: /var/run/dpdk/spdk_pid987706 00:38:01.611 Removing: /var/run/dpdk/spdk_pid988039 00:38:01.611 Removing: /var/run/dpdk/spdk_pid988358 00:38:01.611 Removing: /var/run/dpdk/spdk_pid988706 00:38:01.611 Removing: /var/run/dpdk/spdk_pid989099 00:38:01.611 Removing: /var/run/dpdk/spdk_pid990191 00:38:01.611 Removing: /var/run/dpdk/spdk_pid993766 00:38:01.611 Removing: /var/run/dpdk/spdk_pid994136 00:38:01.611 Removing: /var/run/dpdk/spdk_pid994505 00:38:01.611 Removing: /var/run/dpdk/spdk_pid994829 00:38:01.611 Removing: /var/run/dpdk/spdk_pid995208 00:38:01.611 Removing: /var/run/dpdk/spdk_pid995395 00:38:01.611 Removing: /var/run/dpdk/spdk_pid995913 00:38:01.611 Removing: /var/run/dpdk/spdk_pid995930 00:38:01.611 Removing: /var/run/dpdk/spdk_pid996291 00:38:01.611 Removing: /var/run/dpdk/spdk_pid996603 00:38:01.611 Removing: /var/run/dpdk/spdk_pid996668 00:38:01.611 Removing: /var/run/dpdk/spdk_pid996998 00:38:01.611 Removing: /var/run/dpdk/spdk_pid997448 00:38:01.611 Removing: /var/run/dpdk/spdk_pid997797 00:38:01.611 Removing: /var/run/dpdk/spdk_pid998203 00:38:01.873 Clean 00:38:01.873 18:53:55 -- common/autotest_common.sh@1451 -- # return 0 00:38:01.873 18:53:55 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:01.873 18:53:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:01.873 18:53:55 -- common/autotest_common.sh@10 -- # set +x 00:38:01.873 18:53:55 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:01.873 18:53:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:01.873 18:53:55 -- 
common/autotest_common.sh@10 -- # set +x 00:38:01.873 18:53:55 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:01.873 18:53:55 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:01.873 18:53:55 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:01.873 18:53:55 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:01.873 18:53:55 -- spdk/autotest.sh@394 -- # hostname 00:38:01.873 18:53:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:02.135 geninfo: WARNING: invalid characters removed from testname! 00:38:28.721 18:54:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:30.631 18:54:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:32.539 18:54:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:33.921 18:54:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:35.833 18:54:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:37.215 18:54:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
00:38:39.124 18:54:32 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:38:39.124 18:54:32 -- common/autotest_common.sh@1681 -- $ lcov --version
00:38:39.124 18:54:32 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:38:39.124 18:54:32 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:38:39.124 18:54:32 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:38:39.124 18:54:32 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:38:39.124 18:54:32 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:38:39.124 18:54:32 -- scripts/common.sh@336 -- $ IFS=.-:
00:38:39.124 18:54:32 -- scripts/common.sh@336 -- $ read -ra ver1
00:38:39.124 18:54:32 -- scripts/common.sh@337 -- $ IFS=.-:
00:38:39.124 18:54:32 -- scripts/common.sh@337 -- $ read -ra ver2
00:38:39.124 18:54:32 -- scripts/common.sh@338 -- $ local 'op=<'
00:38:39.124 18:54:32 -- scripts/common.sh@340 -- $ ver1_l=2
00:38:39.124 18:54:32 -- scripts/common.sh@341 -- $ ver2_l=1
00:38:39.124 18:54:32 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:38:39.124 18:54:32 -- scripts/common.sh@344 -- $ case "$op" in
00:38:39.124 18:54:32 -- scripts/common.sh@345 -- $ : 1
00:38:39.124 18:54:32 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:38:39.124 18:54:32 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:39.124 18:54:32 -- scripts/common.sh@365 -- $ decimal 1
00:38:39.124 18:54:32 -- scripts/common.sh@353 -- $ local d=1
00:38:39.124 18:54:32 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:38:39.124 18:54:32 -- scripts/common.sh@355 -- $ echo 1
00:38:39.124 18:54:32 -- scripts/common.sh@365 -- $ ver1[v]=1
00:38:39.124 18:54:32 -- scripts/common.sh@366 -- $ decimal 2
00:38:39.124 18:54:32 -- scripts/common.sh@353 -- $ local d=2
00:38:39.124 18:54:32 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:38:39.124 18:54:32 -- scripts/common.sh@355 -- $ echo 2
00:38:39.124 18:54:32 -- scripts/common.sh@366 -- $ ver2[v]=2
00:38:39.124 18:54:32 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:38:39.124 18:54:32 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:38:39.124 18:54:32 -- scripts/common.sh@368 -- $ return 0
00:38:39.124 18:54:32 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:39.124 18:54:32 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:38:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:39.124 --rc genhtml_branch_coverage=1
00:38:39.124 --rc genhtml_function_coverage=1
00:38:39.124 --rc genhtml_legend=1
00:38:39.124 --rc geninfo_all_blocks=1
00:38:39.124 --rc geninfo_unexecuted_blocks=1
00:38:39.124
00:38:39.124 '
00:38:39.124 18:54:32 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:38:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:39.124 --rc genhtml_branch_coverage=1
00:38:39.124 --rc genhtml_function_coverage=1
00:38:39.124 --rc genhtml_legend=1
00:38:39.124 --rc geninfo_all_blocks=1
00:38:39.124 --rc geninfo_unexecuted_blocks=1
00:38:39.124
00:38:39.124 '
00:38:39.124 18:54:32 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:38:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:39.124 --rc genhtml_branch_coverage=1
00:38:39.124 --rc genhtml_function_coverage=1
00:38:39.124 --rc genhtml_legend=1
00:38:39.124 --rc geninfo_all_blocks=1
00:38:39.124 --rc geninfo_unexecuted_blocks=1
00:38:39.124
00:38:39.124 '
00:38:39.124 18:54:32 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:38:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:39.124 --rc genhtml_branch_coverage=1
00:38:39.124 --rc genhtml_function_coverage=1
00:38:39.124 --rc genhtml_legend=1
00:38:39.124 --rc geninfo_all_blocks=1
00:38:39.124 --rc geninfo_unexecuted_blocks=1
00:38:39.124
00:38:39.124 '
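
The lt 1.15 2 trace above is how autotest decides whether the installed lcov is older than version 2, apparently to pick the --rc spellings that pre-2.0 lcov expects. The comparison splits each version string on '.', '-' or ':' and walks the numeric fields left to right. A self-contained sketch of that logic, assuming purely numeric fields (the name version_lt is hypothetical; the real helper is cmp_versions in scripts/common.sh, which also validates each field):

  # Return 0 (true) when $1 is strictly older than $2, e.g. 1.15 vs 2.
  version_lt() {
      local -a ver1 ver2
      local v a b
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}        # missing fields compare as 0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1                                   # equal is not less-than
  }
  # Usage mirroring the trace: last field of `lcov --version` vs 2.
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"
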
00:38:39.124 18:54:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:39.124 18:54:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:38:39.124 18:54:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:39.124 18:54:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:39.124 18:54:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:39.124 18:54:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:39.124 18:54:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:39.124 18:54:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:39.124 18:54:32 -- paths/export.sh@5 -- $ export PATH
00:38:39.124 18:54:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:39.124 18:54:32 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:39.124 18:54:32 -- common/autobuild_common.sh@486 -- $ date +%s
00:38:39.124 18:54:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728406472.XXXXXX
00:38:39.124 18:54:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728406472.z6q7NZ
00:38:39.124 18:54:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:38:39.124 18:54:32 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:38:39.124 18:54:32 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:38:39.124 18:54:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:38:39.124 18:54:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
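
Note how each paths/export.sh line above re-prepends a tool directory that is already present, so /opt/go and friends end up on PATH twice. The same intent with a duplicate check added might look like this sketch (path_prepend is a hypothetical helper, not part of export.sh):

  # Prepend a directory to PATH only if it is not already listed.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;              # already on PATH, do nothing
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  export PATH
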
00:38:39.124 18:54:32 -- common/autobuild_common.sh@502 -- $ get_config_params
00:38:39.124 18:54:32 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:38:39.124 18:54:32 -- common/autotest_common.sh@10 -- $ set +x
00:38:39.124 18:54:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:38:39.124 18:54:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:38:39.124 18:54:32 -- pm/common@17 -- $ local monitor
00:38:39.124 18:54:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:39.124 18:54:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:39.124 18:54:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:39.124 18:54:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:39.124 18:54:32 -- pm/common@21 -- $ date +%s
00:38:39.124 18:54:32 -- pm/common@25 -- $ sleep 1
00:38:39.124 18:54:32 -- pm/common@21 -- $ date +%s
00:38:39.124 18:54:32 -- pm/common@21 -- $ date +%s
00:38:39.124 18:54:32 -- pm/common@21 -- $ date +%s
00:38:39.124 18:54:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406472
00:38:39.124 18:54:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406472
00:38:39.124 18:54:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406472
00:38:39.124 18:54:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406472
00:38:39.124 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406472_collect-cpu-load.pm.log
00:38:39.124 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406472_collect-vmstat.pm.log
00:38:39.125 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406472_collect-cpu-temp.pm.log
00:38:39.125 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406472_collect-bmc-pm.bmc.pm.log
00:38:40.067 18:54:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
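
start_monitor_resources, as traced above, launches one background collector per resource with a shared monitor.autopackage.sh.<epoch> prefix and arranges for them to be stopped on exit. A rough sketch of that start/stop shape, assuming each collector records its own pid in <name>.pid under the power directory (consistent with the pid files the trap reads later, but an assumption about pm/common's internals; the function names here are illustrative):

  PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
  POWER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  prefix=monitor.autopackage.sh.$(date +%s)
  start_monitors() {
      local mon
      for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
          # Assumption: the collector itself writes $POWER/$mon.pid.
          "$PM/$mon" -d "$POWER" -l -p "$prefix" &
      done
  }
  stop_monitors() {
      local pidfile
      for pidfile in "$POWER"/*.pid; do
          [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
      done
  }
  start_monitors
  trap stop_monitors EXIT
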
00:38:40.067 18:54:33 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:38:40.067 18:54:33 -- spdk/autopackage.sh@14 -- $ timing_finish
00:38:40.067 18:54:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:40.067 18:54:33 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:40.067 18:54:33 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:40.067 18:54:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:40.067 18:54:33 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:40.067 18:54:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:40.067 18:54:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:40.067 18:54:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:40.067 18:54:33 -- pm/common@44 -- $ pid=1573543
00:38:40.067 18:54:33 -- pm/common@50 -- $ kill -TERM 1573543
00:38:40.067 18:54:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:40.067 18:54:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:40.067 18:54:33 -- pm/common@44 -- $ pid=1573544
00:38:40.067 18:54:33 -- pm/common@50 -- $ kill -TERM 1573544
00:38:40.067 18:54:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:40.067 18:54:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:40.067 18:54:33 -- pm/common@44 -- $ pid=1573546
00:38:40.067 18:54:33 -- pm/common@50 -- $ kill -TERM 1573546
00:38:40.067 18:54:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:40.067 18:54:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:40.067 18:54:33 -- pm/common@44 -- $ pid=1573571
00:38:40.067 18:54:33 -- pm/common@50 -- $ sudo -E kill -TERM 1573571
00:38:40.067 + [[ -n 894345 ]]
00:38:40.067 + sudo kill 894345
00:38:40.079 [Pipeline] }
00:38:40.094 [Pipeline] // stage
00:38:40.100 [Pipeline] }
00:38:40.113 [Pipeline] // timeout
00:38:40.118 [Pipeline] }
00:38:40.131 [Pipeline] // catchError
00:38:40.136 [Pipeline] }
00:38:40.150 [Pipeline] // wrap
00:38:40.155 [Pipeline] }
00:38:40.168 [Pipeline] // catchError
00:38:40.176 [Pipeline] stage
00:38:40.178 [Pipeline] { (Epilogue)
00:38:40.190 [Pipeline] catchError
00:38:40.192 [Pipeline] {
00:38:40.204 [Pipeline] echo
00:38:40.206 Cleanup processes
00:38:40.211 [Pipeline] sh
00:38:40.501 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:40.501 1573681 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:40.501 1574241 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:40.514 [Pipeline] sh
00:38:40.801 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:40.801 ++ grep -v 'sudo pgrep'
00:38:40.801 ++ awk '{print $1}'
00:38:40.801 + sudo kill -9 1573681
00:38:40.813 [Pipeline] sh
00:38:41.101 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:53.340 [Pipeline] sh
00:38:53.628 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:53.629 Artifacts sizes are good
00:38:53.642 [Pipeline] archiveArtifacts
00:38:53.649 Archiving artifacts
00:38:53.781 [Pipeline] sh
00:38:54.066 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
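
The "Cleanup processes" step above lists everything still referencing the workspace, filters its own pgrep out of the listing, and force-kills the remainder; here that was the leftover ipmitool dump started by the BMC monitor. The same idiom as a one-liner sketch (xargs -r swapped in for the log's command substitution; '|| true' keeps the step green when nothing is left to kill):

  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' \
      | xargs -r sudo kill -9 || true
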
00:38:54.081 [Pipeline] cleanWs
00:38:54.091 [WS-CLEANUP] Deleting project workspace...
00:38:54.091 [WS-CLEANUP] Deferred wipeout is used...
00:38:54.098 [WS-CLEANUP] done
00:38:54.099 [Pipeline] }
00:38:54.114 [Pipeline] // catchError
00:38:54.124 [Pipeline] sh
00:38:54.541 + logger -p user.info -t JENKINS-CI
00:38:54.550 [Pipeline] }
00:38:54.560 [Pipeline] // stage
00:38:54.563 [Pipeline] }
00:38:54.573 [Pipeline] // node
00:38:54.577 [Pipeline] End of Pipeline
00:38:54.607 Finished: SUCCESS